id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
395,320 | https://en.wikipedia.org/wiki/Chromatic%20adaptation | Chromatic adaptation is the human visual system’s ability to adjust to changes in illumination in order to preserve the appearance of object colors. It is responsible for the stable appearance of object colors despite the wide variation of light which might be reflected from an object and observed by our eyes. A chromatic adaptation transform (CAT) function emulates this important aspect of color perception in color appearance models.
An object may be viewed under various conditions. For example, it may be illuminated by sunlight, the light of a fire, or a harsh electric light. In all of these situations, human vision perceives that the object has the same color: a red apple always appears red, whether viewed during the day or at night (provided there is enough light to see it in color, since the rod cells of the eye do not perceive red). On the other hand, a camera with no adjustment for light may register the apple as having varying color. This feature of the visual system is called chromatic adaptation, or color constancy; when the correction occurs in a camera it is referred to as white balance.
Though the human visual system generally does maintain constant perceived color under different lighting, there are situations where the relative brightness of two different stimuli will appear reversed at different illuminance levels. For example, the bright yellow petals of flowers will appear dark compared to the green leaves in dim light while the opposite is true during the day. This is known as the Purkinje effect, and arises because the peak sensitivity of the human eye shifts toward the blue end of the spectrum at lower light levels.
Von Kries transform
The von Kries chromatic adaptation method is a technique that is sometimes used in camera image processing. The method is to apply a gain to each of the human cone cell spectral sensitivity responses so as to keep the adapted appearance of the reference white constant. The application of Johannes von Kries's idea of adaptive gains on the three cone cell types was first explicitly applied to the problem of color constancy by Herbert E. Ives, and the method is sometimes referred to as the Ives transform or the von Kries–Ives adaptation.
The von Kries coefficient rule rests on the assumption that color constancy is achieved by individually adapting the gains of the three cone responses, the gains depending on the sensory context, that is, the color history and surround. Thus, the cone responses from two radiant spectra can be matched by appropriate choice of diagonal adaptation matrices D1 and D2:
D1 S f1 = D2 S f2, where S is the cone sensitivity matrix and f is the spectrum of the conditioning stimulus. This leads to the von Kries transform for chromatic adaptation in LMS color space (the space of long-, medium-, and short-wavelength cone responses), in which colors are scaled by the diagonal matrix D = D1⁻¹ D2:
This diagonal matrix D maps cone responses, or colors, in one adaptation state to corresponding colors in another; when the adaptation state is presumed to be determined by the illuminant, this matrix is useful as an illuminant adaptation transform. The elements of the diagonal matrix D are the ratios of the cone responses (Long, Medium, Short) for the illuminant's white point.
The more complete von Kries transform, for colors represented in XYZ or RGB color space, includes matrix transformations into and out of LMS space, with the diagonal transform D in the middle.
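As an illustration of the structure just described, here is a minimal numerical sketch in Python with NumPy (not taken from any cited source). The XYZ-to-LMS matrix M is left as a user-supplied parameter, since a real application would use a published matrix such as Bradford or CAT02; the identity matrix and the approximate D65/D50 white points below are placeholders for illustration only.

```python
import numpy as np

def von_kries_adapt(xyz, src_white_xyz, dst_white_xyz, M):
    """Adapt an XYZ color from a source white point to a destination white point.

    M is a 3x3 XYZ-to-LMS matrix (e.g., Bradford or CAT02, taken from a
    colorimetry reference). The full transform is M^-1 . D . M, where D is the
    diagonal matrix of destination/source cone-response ratios for the whites.
    """
    lms_src_white = M @ src_white_xyz
    lms_dst_white = M @ dst_white_xyz
    D = np.diag(lms_dst_white / lms_src_white)   # per-cone gains
    return np.linalg.inv(M) @ D @ (M @ xyz)

# Illustrative use with placeholder values (not real colorimetric data):
if __name__ == "__main__":
    M = np.eye(3)                                # placeholder; substitute a published CAT matrix
    src_white = np.array([0.9505, 1.0, 1.089])   # approximate D65 white, for illustration
    dst_white = np.array([0.9642, 1.0, 0.8251])  # approximate D50 white, for illustration
    color = np.array([0.3, 0.4, 0.5])
    print(von_kries_adapt(color, src_white, dst_white, M))
```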
CIE color appearance models
The International Commission on Illumination (CIE) has published a set of color appearance models, most of which included a color adaptation function. CIE L*a*b* (CIELAB) performs a "simple" von Kries-type transform in XYZ color space, while CIELUV uses a Judd-type (translational) white point adaptation. Two revisions of more comprehensive color appearance models, CIECAM97s and CIECAM02, each included a CAT function, CMCCAT97 and CAT02 respectively. CAT02's predecessor is a simplified version of CMCCAT97 known as CMCCAT2000.
References
Further reading
External links
Color Balancing Algorithms
Chromatic Adaptation Evaluation
Color appearance phenomena | Chromatic adaptation | [
"Physics"
] | 825 | [
"Optical phenomena",
"Physical phenomena",
"Color appearance phenomena"
] |
395,375 | https://en.wikipedia.org/wiki/Activated%20carbon | Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that greatly increase the surface area available for adsorption or chemical reactions. (Adsorption, not to be confused with absorption, is a process where atoms or molecules adhere to a surface). The pores can be thought of as a microscopic "sponge" structure. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active.
Because it is so porous on a microscopic scale, one gram of activated carbon has a surface area of over , as determined by gas adsorption. For charcoal, the equivalent figure before activation is about . A useful activation level may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties.
Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being activated. When derived from coal, it is referred to as activated coal. Activated coke is derived from coke.
Uses
Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications.
Industrial
One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing properties like brightness, smoothness, ductility, etc. Due to passage of direct current and electrolytic reactions of anodic oxidation and cathodic reduction, organic additives generate unwanted breakdown products in solution. Their excessive build up can adversely affect plating quality and physical properties of deposited metal. Activated carbon treatment removes such impurities and restores plating performance to the desired level.
Medical
Activated carbon is used to treat poisonings and overdoses following oral ingestion. Tablets or capsules of activated carbon are used in many countries as an over-the-counter drug to treat diarrhea, indigestion, and flatulence. However, activated charcoal shows no effect on intestinal gas and diarrhea, is ordinarily medically ineffective if poisoning resulted from ingestion of corrosive agents, boric acid, or petroleum products, and is particularly ineffective against poisonings of strong acids or bases, cyanide, iron, lithium, arsenic, methanol, ethanol, or ethylene glycol. Activated carbon will not prevent these chemicals from being absorbed into the human body. It is on the World Health Organization's List of Essential Medicines.
Incorrect application (e.g. into the lungs) results in pulmonary aspiration, which can sometimes be fatal if immediate medical treatment is not initiated.
Analytical chemistry
Activated carbon, in 50% w/w combination with celite, is used as a stationary phase in the low-pressure chromatographic separation of carbohydrates (mono-, di-, and trisaccharides), using ethanol solutions (5–50%) as the mobile phase in analytical or preparative protocols.
Activated carbon is useful for extracting the direct oral anticoagulants (DOACs) such as dabigatran, apixaban, rivaroxaban and edoxaban from blood plasma samples. For this purpose it has been made into "minitablets", each containing 5 mg activated carbon for treating 1 ml samples of DOAC. Since this activated carbon has no effect on blood clotting factors, heparin, or most other anticoagulants, this allows a plasma sample to be analyzed for abnormalities otherwise affected by the DOACs.
Environmental
Carbon adsorption has numerous applications in removing pollutants from air or water streams both in the field and in industrial processes such as:
Spill cleanup
Groundwater remediation
Drinking water filtration
Wastewater treatment
Air purification
Volatile organic compounds capture from painting, dry cleaning, gasoline dispensing operations, and other processes
Volatile organic compounds recovery (SRU, Solvent Recovery Unit; SRP, Solvent Recovery Plant; SRS, Solvent Recovery System) from flexible packaging, converting, coating, and other processes.
During early implementation of the 1974 Safe Drinking Water Act in the US, EPA officials developed a rule that proposed requiring drinking water treatment systems to use granular activated carbon. Because of its high cost, the so-called GAC rule encountered strong opposition across the country from the water supply industry, including the largest water utilities in California. Hence, the agency set aside the rule. Activated carbon filtration is an effective water treatment method due to its multi-functional nature. There are specific types of activated carbon filtration methods and equipment that are indicated – depending upon the contaminants involved.
Activated carbon is also used for the measurement of radon concentration in air.
Biomass waste-derived activated carbons were also successfully used for the removal of caffeine and paracetamol from water.
Agricultural
Activated carbon (charcoal) is an allowed substance used by organic farmers in both livestock production and wine making. In livestock production it is used as a pesticide, animal feed additive, processing aid, nonagricultural ingredient and disinfectant. In organic winemaking, activated carbon is allowed for use as a processing agent to adsorb brown color pigments from white grape concentrates.
It is sometimes used as biochar.
Distilled alcoholic beverage purification
Activated carbon filters (AC filters) can be used to filter vodka and whiskey of organic impurities which can affect color, taste, and odor. Passing an organically impure vodka through an activated carbon filter at the proper flow rate will result in vodka with an identical alcohol content and significantly increased organic purity, as judged by odor and taste.
Fuel storage
Research is being done testing various activated carbons' ability to store natural gas and hydrogen gas. The porous material acts like a sponge for different types of gases. The gas is attracted to the carbon material via Van der Waals forces. Some carbons have been able to achieve binding energies of 5–10 kJ per mol. The gas may then be desorbed when subjected to higher temperatures and either combusted to do work or in the case of hydrogen gas extracted for use in a hydrogen fuel cell. Gas storage in activated carbons is an appealing gas storage method because the gas can be stored in a low pressure, low mass, low volume environment that would be much more feasible than bulky on-board pressure tanks in vehicles. The United States Department of Energy has specified certain goals to be achieved in the area of research and development of nano-porous carbon materials. All of the goals are yet to be satisfied but numerous institutions, including the ALL-CRAFT program, are continuing to conduct work in this field.
Gas purification
Filters with activated carbon are usually used in compressed air and gas purification to remove oil vapors, odor, and other hydrocarbons from the air. The most common designs use a 1-stage or 2-stage filtration principle in which activated carbon is embedded inside the filter media.
Activated carbon filters are used to retain radioactive gases within the air vacuumed from a nuclear boiling water reactor turbine condenser. The large charcoal beds adsorb these gases and retain them while they rapidly decay to nonradioactive solid species. The solids are trapped in the charcoal particles, while the filtered air passes through.
Chemical purification
Activated carbon is commonly used on the laboratory scale to purify solutions of organic molecules containing unwanted colored organic impurities.
Filtration over activated carbon is used in large scale fine chemical and pharmaceutical processes for the same purpose. The carbon is either mixed with the solution then filtered off or immobilized in a filter.
Mercury scrubbing
Activated carbon, often infused with sulfur or iodine, is widely used to trap mercury emissions from coal-fired power stations, medical incinerators, and from natural gas at the wellhead. However, despite its effectiveness, activated carbon is expensive to use.
Since it is often not recycled, the mercury-laden activated carbon presents a disposal dilemma. If the activated carbon contains less than 260 ppm mercury, United States federal regulations allow it to be stabilized (for example, trapped in concrete) for landfilling. However, waste containing greater than 260 ppm is considered to be in the high-mercury subcategory and is banned from landfilling (Land-Ban Rule). This material is now accumulating in warehouses and in deep abandoned mines at an estimated rate of 100 tons per year.
The problem of disposal of mercury-laden activated carbon is not unique to the United States. In the Netherlands, this mercury is largely recovered and the activated carbon is disposed of by complete burning, forming carbon dioxide (CO2).
Food additive
Activated, food-grade charcoal became a food trend in 2016, being used as an additive to impart a "slightly smoky" taste and a dark coloring to products including hotdogs, ice cream, pizza bases, and bagels. People taking medication, including birth control pills and antidepressants, are advised to avoid novelty foods or drinks that use activated charcoal coloring since it can render the medication ineffective.
Smoking filtration
Activated charcoal is used in smoking filters to reduce the tar and other combustion products present in the smoke; it has been found to reduce the toxicants from tobacco smoke, in particular the free radicals.
Structure of activated carbon
The structure of activated carbon has long been a subject of debate. In a book published in 2006, Harry Marsh and Francisco Rodríguez-Reinoso considered more than 15 models for the structure, without coming to a definite conclusion about which was correct. Recent work using aberration-corrected transmission electron microscopy has suggested that activated carbons may have a structure related to that of the fullerenes, with pentagonal and heptagonal carbon rings.
Production
Activated carbon is carbon produced from carbonaceous source materials such as bamboo, coconut husk, willow peat, wood, coir, lignite, coal, and petroleum pitch. It can be produced (activated) by one of the following processes:
Physical activation: The source material is developed into activated carbon using hot gases. Air is then introduced to burn out the gases, creating a graded, screened and de-dusted form of activated carbon. This is generally done by using one or more of the following processes:
Carbonization: Material with carbon content is pyrolyzed at temperatures in the range 600–900 °C, usually in an inert atmosphere with gases such as argon or nitrogen
Activation/oxidation: Raw material or carbonized material is exposed to oxidizing atmospheres (oxygen or steam) at temperatures above 250 °C, usually in the temperature range of 600–1200 °C. In one reported procedure, activation was performed by heating the sample for one hour in a muffle furnace at 450 °C in the presence of air.
Chemical activation: The carbon material is impregnated with certain chemicals. The chemical is typically an acid, strong base, or a salt (phosphoric acid 25%, potassium hydroxide 5%, sodium hydroxide 5%, potassium carbonate 5%, calcium chloride 25%, and zinc chloride 25%). The carbon is then subjected to high temperatures (250–600 °C). It is believed that the temperature activates the carbon at this stage by forcing the material to open up and have more microscopic pores. Chemical activation is preferred to physical activation owing to the lower temperatures, better quality consistency, and shorter time needed for activating the material.
The Dutch company Norit NV, part of the Cabot Corporation, is the largest producer of activated carbon in the world. Haycarb, a Sri Lankan coconut shell-based company, controls 16% of the global market share.
Classification
Activated carbons are complex products which are difficult to classify on the basis of their behaviour, surface characteristics and other fundamental criteria. However, some broad classification is made for general purposes based on their size, preparation methods, and industrial applications.
Powdered activated carbon (PAC)
Normally, activated carbons (R 1) are made in particulate form as powders or fine granules less than 1.0 mm in size with an average diameter between 0.15 and 0.25 mm. Thus they present a large surface to volume ratio with a small diffusion distance. Activated carbon (R 1) is defined as the activated carbon particles retained on a 50-mesh sieve (0.297 mm).
Powdered activated carbon (PAC) material is finer material. PAC is made up of crushed or ground carbon particles, 95–100% of which will pass through a designated mesh sieve. The ASTM classifies particles passing through an 80-mesh sieve (0.177 mm) and smaller as PAC. It is not common to use PAC in a dedicated vessel, due to the high head loss that would occur. Instead, PAC is generally added directly to other process units, such as raw water intakes, rapid mix basins, clarifiers, and gravity filters.
Granular activated carbon (GAC)
Granular activated carbon (GAC) has a relatively larger particle size compared to powdered activated carbon and consequently, presents a smaller external surface. Diffusion of the adsorbate is thus an important factor. These carbons are suitable for adsorption of gases and vapors, because gaseous substances diffuse rapidly. Granulated carbons are used for air filtration and water treatment, as well as for general deodorization and separation of components in flow systems and in rapid mix basins. GAC can be obtained in either granular or extruded form. GAC is designated by sizes such as 8×20, 20×40, or 8×30 for liquid phase applications and 4×6, 4×8 or 4×10 for vapor phase applications. A 20×40 carbon is made of particles that will pass through a U.S. Standard Mesh Size No. 20 sieve (0.84 mm) (generally specified as 85% passing) but be retained on a U.S. Standard Mesh Size No. 40 sieve (0.42 mm) (generally specified as 95% retained). AWWA (1992) B604 uses the 50-mesh sieve (0.297 mm) as the minimum GAC size. The most popular aqueous-phase carbons are the 12×40 and 8×30 sizes because they have a good balance of size, surface area, and head loss characteristics.
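The sieve-designation convention just described can be made concrete with a short sketch (a hypothetical helper, using only the sieve openings quoted in this section; a real implementation would draw on a full U.S. Standard sieve table).

```python
# Sieve openings in mm for the U.S. Standard mesh numbers quoted in this section.
SIEVE_OPENING_MM = {20: 0.84, 40: 0.42, 50: 0.297, 80: 0.177}

def gac_size_range_mm(designation: str):
    """Return (max_mm, min_mm) particle bounds for a GAC designation like '20x40'.

    The first number is the sieve the particles pass through, the second the
    sieve they are retained on, so the first sieve's opening is the upper size
    bound and the second sieve's opening is the lower bound.
    """
    coarse, fine = (int(n) for n in designation.lower().split("x"))
    return SIEVE_OPENING_MM[coarse], SIEVE_OPENING_MM[fine]

print(gac_size_range_mm("20x40"))  # (0.84, 0.42): particles between 0.42 mm and 0.84 mm
```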
Extruded activated carbon (EAC)
Extruded activated carbon (EAC) combines powdered activated carbon with a binder, which are fused together and extruded into a cylindrical shaped activated carbon block with diameters from 0.8 to 130 mm. These are mainly used for gas phase applications because of their low pressure drop, high mechanical strength and low dust content. Also sold as CTO filter (Chlorine, Taste, Odor).
Bead activated carbon (BAC)
Bead activated carbon (BAC) is made from petroleum pitch and supplied in diameters from approximately 0.35 to 0.80 mm. Similar to EAC, it is also noted for its low pressure drop, high mechanical strength and low dust content, but with a smaller grain size. Its spherical shape makes it preferred for fluidized bed applications such as water filtration.
Impregnated carbon
Impregnated carbons are porous carbons containing several types of inorganic impregnant, such as iodine or silver. Carbons impregnated with cations such as aluminium, manganese, zinc, iron, lithium, and calcium have also been prepared for specific applications in air pollution control, especially in museums and galleries. Due to its antimicrobial and antiseptic properties, silver-loaded activated carbon is used as an adsorbent for purification of domestic water. Drinking water can be obtained from natural water by treating the natural water with a mixture of activated carbon and aluminium hydroxide (Al(OH)3), a flocculating agent. Impregnated carbons are also used for the adsorption of hydrogen sulfide (H2S) and thiols. Adsorption capacities for H2S as high as 50% by weight have been reported.
Polymer coated carbon
This is a process by which a porous carbon can be coated with a biocompatible polymer to give a smooth and permeable coat without blocking the pores. The resulting carbon is useful for hemoperfusion. Hemoperfusion is a treatment technique in which large volumes of the patient's blood are passed over an adsorbent substance in order to remove toxic substances from the blood.
Woven carbon
Technical rayon fiber can be processed into activated carbon cloth for carbon filtering. The adsorption capacity of activated carbon cloth is greater than that of activated charcoal (BET surface area: 500–1500 m2/g, pore volume: 0.3–0.8 cm3/g). Thanks to its different physical form, the activated material can be used in a wide range of applications (supercapacitors, odor absorbers, the CBRN-defense industry, etc.).
Properties
A gram of activated carbon can have a surface area in excess of , with being readily achievable. Carbon aerogels, while more expensive, have even higher surface areas, and are used in special applications.
Under an electron microscope, the high surface-area structures of activated carbon are revealed. Individual particles are intensely convoluted and display various kinds of porosity; there may be many areas where flat surfaces of graphite-like material run parallel to each other, separated by only a few nanometres or so. These micropores provide superb conditions for adsorption to occur, since adsorbing material can interact with many surfaces simultaneously. Tests of adsorption behaviour are usually done with nitrogen gas at 77 K under high vacuum, but in everyday terms activated carbon is perfectly capable of producing the equivalent, by adsorption from its environment, liquid water from steam at and a pressure of 1/10,000 of an atmosphere.
James Dewar, the scientist after whom the Dewar (vacuum flask) is named, spent much time studying activated carbon and published a paper regarding its adsorption capacity with regard to gases. In this paper, he discovered that cooling the carbon to liquid nitrogen temperatures allowed it to adsorb significant quantities of numerous air gases, among others, that could then be recollected by simply allowing the carbon to warm again and that coconut-based carbon was superior for the effect. He uses oxygen as an example, wherein the activated carbon would typically adsorb the atmospheric concentration (21%) under standard conditions, but release over 80% oxygen if the carbon was first cooled to low temperatures.
Physically, activated carbon binds materials by van der Waals force or London dispersion force.
Activated carbon does not bind well to certain chemicals, including alcohols, diols, strong acids and bases, metals and most inorganics, such as lithium, sodium, iron, lead, arsenic, fluorine, and boric acid.
Activated carbon adsorbs iodine very well. The iodine capacity, mg/g, (ASTM D28 Standard Method test) may be used as an indication of total surface area.
Carbon monoxide is not well adsorbed by activated carbon. This should be of particular concern to those using the material in filters for respirators, fume hoods, or other gas control systems because the gas is undetectable to the human senses, toxic to the metabolism, and neurotoxic.
Substantial lists of the common industrial and agricultural gases adsorbed by activated carbon can be found online.
Activated carbon can be used as a substrate for the application of various chemicals to improve the adsorptive capacity for some inorganic (and problematic organic) compounds such as hydrogen sulfide (H2S), ammonia (NH3), formaldehyde (HCOH), mercury (Hg) and radioactive iodine-131(131I). This property is known as chemisorption.
Iodine number
Many carbons preferentially adsorb small molecules. Iodine number is the most fundamental parameter used to characterize activated carbon performance.
It is a measure of activity level (higher number indicates higher degree of activation) often reported in mg/g (typical range 500–1200 mg/g).
It is a measure of the micropore content of the activated carbon (0 to 20 Å, or up to 2 nm) by adsorption of iodine from solution.
It is equivalent to surface area of carbon between 900 and 1100 m2/g.
It is the standard measure for liquid-phase applications.
Iodine number is defined as the milligrams of iodine adsorbed by one gram of carbon when the iodine concentration in the residual filtrate is at a concentration of 0.02 normal (i.e. 0.02N). Basically, iodine number is a measure of the iodine adsorbed in the pores and, as such, is an indication of the pore volume available in the activated carbon of interest. Typically, water-treatment carbons have iodine numbers ranging from 600 to 1100. Frequently, this parameter is used to determine the degree of exhaustion of a carbon in use. However, this practice should be viewed with caution, as chemical interactions with the adsorbate may affect the iodine uptake, giving false results. Thus, the use of iodine number as a measure of the degree of exhaustion of a carbon bed can only be recommended if it has been shown to be free of chemical interactions with adsorbates and if an experimental correlation between iodine number and the degree of exhaustion has been determined for the particular application.
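As a rough illustration of the definition above, the following sketch computes an iodine number from a simple mass balance on hypothetical titration data. It deliberately omits the residual-concentration correction factor used in the standard test procedure, so it is an approximation, not the standard method.

```python
def iodine_number_mg_per_g(c_initial_N, c_residual_N, volume_L, carbon_mass_g):
    """Simplified iodine number: mg of iodine removed from solution per gram of carbon.

    Concentrations are in normality (equivalents per litre); 126.9 g/mol is the
    molar mass of atomic iodine. This ignores the correction applied in the
    standard test when the residual concentration deviates from 0.02 N.
    """
    iodine_adsorbed_mg = (c_initial_N - c_residual_N) * volume_L * 126.9 * 1000
    return iodine_adsorbed_mg / carbon_mass_g

# Hypothetical data: 100 mL of 0.10 N iodine treated with 1.0 g of carbon,
# leaving a residual filtrate concentration of 0.02 N.
print(round(iodine_number_mg_per_g(0.10, 0.02, 0.100, 1.0)))  # ~1015 mg/g, within the typical range
```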
Molasses
Some carbons are more adept at adsorbing large molecules.
Molasses number or molasses efficiency is a measure of the mesopore content of the activated carbon (greater than 20 Å, or larger than 2 nm) by adsorption of molasses from solution.
A high molasses number indicates a high adsorption of big molecules (range 95–600). Caramel dp (decolorizing performance) is similar to molasses number. Molasses efficiency is reported as a percentage (range 40%–185%) and parallels molasses number (600 = 185%, 425 = 85%).
The European molasses number (range 525–110) is inversely related to the North American molasses number.
Molasses Number is a measure of the degree of decolorization of a standard molasses solution that has been diluted and standardized against standardized activated carbon. Due to the size of color bodies, the molasses number represents the potential pore volume available for larger adsorbing species. As all of the pore volume may not be available for adsorption in a particular waste water application, and as some of the adsorbate may enter smaller pores, it is not a good measure of the worth of a particular activated carbon for a specific application. Frequently, this parameter is useful in evaluating a series of active carbons for their rates of adsorption. Given two active carbons with similar pore volumes for adsorption, the one having the higher molasses number will usually have larger feeder pores resulting in more efficient transfer of adsorbate into the adsorption space.
Tannin
Tannins are a mixture of large and medium size molecules.
Carbons with a combination of macropores and mesopores adsorb tannins.
The ability of a carbon to adsorb tannins is reported in parts per million concentration (range 200 ppm–362 ppm).
Methylene blue
Some carbons have a mesopore (20 Å to 50 Å, or 2 to 5 nm) structure which adsorbs medium size molecules, such as the dye methylene blue.
Methylene blue adsorption is reported in g/100g (range 11–28 g/100g).
Dechlorination
Some carbons are evaluated based on the dechlorination half-life length, which measures the chlorine-removal efficiency of activated carbon. The dechlorination half-value length is the depth of carbon required to reduce the chlorine concentration by 50%. A lower half-value length indicates superior performance.
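Because each half-value length of bed depth halves the chlorine concentration, the depth required for a given reduction follows directly, as in this small sketch (the half-value length and concentrations are hypothetical examples).

```python
import math

def bed_depth_for_chlorine_reduction(half_value_length_cm, c_in, c_out):
    """Bed depth needed to reduce chlorine from c_in to c_out, assuming each
    half-value length of carbon halves the concentration."""
    halvings = math.log2(c_in / c_out)
    return halvings * half_value_length_cm

# Hypothetical example: a carbon with a 5 cm half-value length, reducing
# chlorine from 2.0 mg/L to 0.25 mg/L (three halvings).
print(bed_depth_for_chlorine_reduction(5.0, 2.0, 0.25))  # 15.0 cm
```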
Apparent density
The solid or skeletal density of activated carbons will typically range between 2000 and 2100 kg/m3 (125–130 lbs./cubic foot). However, a large part of an activated carbon sample will consist of air space between particles, and the actual or apparent density will therefore be lower, typically 400 to 500 kg/m3 (25–31 lbs./cubic foot).
Higher density provides greater volume activity and normally indicates better-quality activated carbon.
ASTM D2854-09 (2014) is used to determine the apparent density of activated carbon.
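The difference between skeletal and apparent density gives a rough estimate of how much of the bulk volume is void space (pores plus interparticle air). A small sketch using mid-range values from the figures above:

```python
def void_fraction(apparent_density, skeletal_density):
    """Fraction of the bulk volume that is not solid carbon (pores + interparticle space)."""
    return 1.0 - apparent_density / skeletal_density

# Mid-range values from the paragraph above: ~450 kg/m3 apparent, ~2050 kg/m3 skeletal.
print(round(void_fraction(450, 2050), 2))  # ~0.78
```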
Hardness/abrasion number
It is a measure of the activated carbon's resistance to attrition.
It is an important indicator of an activated carbon's ability to maintain its physical integrity and withstand frictional forces. There are large differences in the hardness of activated carbons, depending on the raw material and activity levels (porosity).
Ash content
Ash reduces the overall activity of activated carbon and reduces the efficiency of reactivation. The amount is exclusively dependent on the base raw material used to produce the activated carbon (e.g., coconut, wood, coal, etc.).
Metal oxides (such as Fe2O3) can leach out of activated carbon, resulting in discoloration. Acid/water-soluble ash content is more significant than total ash content. Soluble ash content can be very important for aquarists, as ferric oxide can promote algal growth. A carbon with a low soluble ash content should be used for marine, freshwater fish, and reef tanks to avoid heavy metal poisoning and excess plant/algal growth.
ASTM (D2866 Standard Method test) is used to determine the ash content of activated carbon.
Carbon tetrachloride activity
Measurement of the porosity of an activated carbon by the adsorption of saturated carbon tetrachloride vapour.
Particle size distribution
The finer the particle size of an activated carbon, the better the access to the surface area and the faster the rate of adsorption kinetics. In vapour phase systems this needs to be considered against pressure drop, which will affect energy cost. Careful consideration of particle size distribution can provide significant operating benefits.
However, in the case of using activated carbon for adsorption of minerals such as gold, the particle size should be in the range of . Activated carbon with particle size less than 1 mm would not be suitable for elution (the stripping of mineral from an activated carbon).
Modification of properties and reactivity
Acid-base, oxidation-reduction and specific adsorption characteristics are strongly dependent on the composition of the surface functional groups.
The surface of conventional activated carbon is reactive, capable of oxidation by atmospheric oxygen and oxygen plasma, steam, and also carbon dioxide and ozone.
Oxidation in the liquid phase is caused by a wide range of reagents (HNO3, H2O2, KMnO4).
Through the formation of a large number of basic and acidic groups on the surface of oxidized carbon, its sorption and other properties can differ significantly from those of the unmodified forms.
Activated carbon can be nitrogenated using natural products or polymers, or by processing the carbon with nitrogenating reagents.
Activated carbon can interact with chlorine, bromine and fluorine.
The surface of activated carbon, like that of other carbon materials, can be fluoroalkylated by treatment with (per)fluoropolyether peroxide in the liquid phase, or with a wide range of fluoroorganic substances by CVD methods. Such materials combine high hydrophobicity and chemical stability with electrical and thermal conductivity and can be used as electrode materials for supercapacitors.
Sulfonic acid functional groups can be attached to activated carbon to give "starbons", which can be used to selectively catalyse the esterification of fatty acids. Formation of such activated carbons from halogenated precursors gives a more effective catalyst, which is thought to result from remaining halogens improving stability. The synthesis of activated carbon with chemically grafted superacid sites (–CF2SO3H) has also been reported.
Some of the chemical properties of activated carbon have been attributed to the presence of surface-active carbon double bonds.
The Polanyi adsorption theory is a popular method for analyzing the adsorption of various organic substances to the carbon surface.
Examples of adsorption
Heterogeneous catalysis
The most commonly encountered form of chemisorption in industry occurs when a solid catalyst interacts with a gaseous feedstock, the reactant(s). The adsorption of reactant(s) to the catalyst surface creates a chemical bond, altering the electron density around the reactant molecule and allowing it to undergo reactions that would not normally be available to it.
Reactivation and regeneration
The reactivation or the regeneration of activated carbons involves restoring the adsorptive capacity of saturated activated carbon by desorbing adsorbed contaminants on the activated carbon surface.
Thermal reactivation
The most common regeneration technique employed in industrial processes is thermal reactivation. The thermal regeneration process generally follows three steps:
Adsorbent drying at approximately
High temperature desorption and decomposition () under an inert atmosphere
Residual organic gasification by a non-oxidising gas (steam or carbon dioxide) at elevated temperatures ()
The heat treatment stage utilises the exothermic nature of adsorption and results in desorption, partial cracking and polymerization of the adsorbed organics. The final step aims to remove charred organic residue formed in the porous structure in the previous stage and re-expose the porous carbon structure regenerating its original surface characteristics. After treatment the adsorption column can be reused. Per adsorption-thermal regeneration cycle between 5–15 wt% of the carbon bed is burnt off resulting in a loss of adsorptive capacity. Thermal regeneration is a high energy process due to the high required temperatures making it both an energetically and commercially expensive process. Plants that rely on thermal regeneration of activated carbon have to be of a certain size before it is economically viable to have regeneration facilities onsite. As a result, it is common for smaller waste treatment sites to ship their activated carbon cores to specialised facilities for regeneration.
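The quoted 5–15 wt% burn-off compounds over repeated regenerations. A small sketch of how the bed mass shrinks over cycles, assuming a constant fractional loss and no make-up carbon (the 10% loss and 1000 kg bed are hypothetical values):

```python
def bed_mass_after_cycles(initial_mass_kg, loss_fraction_per_cycle, cycles):
    """Carbon bed mass remaining after repeated thermal regeneration cycles,
    assuming a constant fractional burn-off and no make-up carbon added."""
    return initial_mass_kg * (1.0 - loss_fraction_per_cycle) ** cycles

# With a 10% loss per cycle, roughly 59% of a 1000 kg bed survives five cycles.
print(round(bed_mass_after_cycles(1000, 0.10, 5)))  # ~590 kg
```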
Other regeneration techniques
Current concerns with the high energy/cost nature of thermal regeneration of activated carbon has encouraged research into alternative regeneration methods to reduce the environmental impact of such processes. Though several of the regeneration techniques cited have remained areas of purely academic research, some alternatives to thermal regeneration systems have been employed in industry. Current alternative regeneration methods are:
TSA (thermal swing adsorption) and/or PSA (pressure swing adsorption) processes: through convection (heat transfer) using steam, "hot" inert gas (typically heated nitrogen (150–250 °C (302–482 °F))), or vacuum (T+VSA or TVSA, combining TSA and VSA processes) in situ regeneration
MWR (microwave regeneration)
Chemical and solvent regeneration
Microbial regeneration
Electrochemical regeneration
Ultrasonic regeneration
Wet air oxidation
See also
Activated charcoal cleanse
Biochar
Bamboo charcoal
Binchōtan
Bone char
Carbon filtering
Carbocatalysis
Conjugated microporous polymer
Hydrogen storage
Kværner process
Onboard refueling vapor recovery
References
External links
"Imaging the atomic structure of activated carbon" – Journal of Physics: Condensed Matter
"How Does Activated Carbon Work?" at Slate
"Worshiping the False Idols of Wellness" on activated charcoal as a useless wellness practice at the New York Times
Allotropes of carbon
Filters
Toxicology treatments
Excipients
World Health Organization essential medicines
Gas technologies
Charcoal | Activated carbon | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,680 | [
"Toxicology",
"Allotropes of carbon",
"Allotropes",
"Chemical equipment",
"Filters",
"Toxicology treatments",
"Filtration"
] |
395,402 | https://en.wikipedia.org/wiki/Primase | DNA primase is an enzyme involved in the replication of DNA and is a type of RNA polymerase. Primase catalyzes the synthesis of a short RNA (or DNA in some
living organisms) segment called a primer complementary to a ssDNA (single-stranded DNA) template. After this elongation, the RNA piece is removed by a 5' to 3' exonuclease and refilled with DNA.
Function
In bacteria, primase binds to the DNA helicase, forming a complex called the primosome. Primase is activated by the helicase, whereupon it synthesizes a short RNA primer approximately 11 ± 1 nucleotides long, to which new nucleotides can be added by DNA polymerase. Archaeal and eukaryotic primases are heterodimeric proteins with one large regulatory subunit and one small catalytic subunit.
The RNA segments are first synthesized by primase and then elongated by DNA polymerase. The DNA polymerase then forms a protein complex with the two primase subunits, forming the DNA polymerase alpha-primase complex. Primase is one of the most error-prone and slowest polymerases. Primases in organisms such as E. coli synthesize around 2000 to 3000 primers at a rate of one primer per second. Primase also acts as a halting mechanism, preventing the leading strand from outpacing the lagging strand by halting the progression of the replication fork. The rate-determining step of primase is the formation of the first phosphodiester bond between two RNA nucleotides.
Replication mechanisms differ between bacteria and viruses; in viruses such as the T7 bacteriophage, the primase is covalently linked to the helicase. In viruses such as the herpes simplex virus (HSV-1), primase can form complexes with the helicase. The primase-helicase complex is used to unwind dsDNA (double-stranded DNA) and synthesize the lagging strand using RNA primers. The majority of primers synthesized by primase are two to three nucleotides long.
Types
There are two main types of primase: DnaG found in most bacteria, and the AEP (Archaeo-Eukaryote Primase) superfamily found in archaean and eukaryotic primases. While bacterial primases (DnaG-type) are composed of a single protein unit (a monomer) and synthesize RNA primers, AEP primases are usually composed of two different primase units (a heterodimer) and synthesize two-part primers with both RNA and DNA components. While functionally similar, the two primase superfamilies evolved independently of each other.
DnaG
The crystal structure of the E. coli primase catalytic core, containing the DnaG protein, was determined in 2000. The DnaG primase is cashew-shaped and contains three subdomains. The central subdomain forms a toprim fold, which is made of a mixture of five beta sheets and six alpha helices. The toprim fold is used for binding regulators and metals. The primase uses a phosphotransfer domain for the transfer and coordination of metals, which makes it distinct from other polymerases. The flanking subdomains comprise NH2- and COOH-terminal regions made of alpha helices and beta sheets. The NH2-terminal region interacts with a zinc-binding domain, and the COOH-terminal region interacts with DnaB-ID.
The toprim fold is also found in topoisomerases and in the mitochondrial Twinkle primase/helicase. Some DnaG-like (bacterial-type) primases have been found in archaeal genomes.
AEP
Eukaryote and archaeal primases tend to be more similar to each other, in terms of structure and mechanism, than they are to bacterial primases. The archaea-eukaryotic primase (AEP) superfamily, which most eukaryal and archaeal primase catalytic subunits belong to, has recently been redefined as a primase-polymerase family in recognition of the many other roles played by enzymes in this family. This classification also emphasizes the broad origins of AEP primases; the superfamily is now recognized as transitioning between RNA and DNA functions.
Archaeal and eukaryote primases are heterodimeric proteins with one large regulatory (human PRIM2, p58) and one small catalytic subunit (human PRIM1, p48/p49). The large subunit contains a N-terminal 4Fe–4S cluster, split out in some archaea as PriX/PriCT. The large subunit is implicated in improving the activity and specificity of the small subunit. For example, removing the part corresponding to the large subunit in a fusion protein PolpTN2 results in a slower enzyme with reverse transcriptase activity.
Multifunctional primases
The AEP family of primase-polymerases has diverse features beyond making only primers. In addition to priming DNA during replication, AEP enzymes may have additional functions in the DNA replication process, such as polymerization of DNA or RNA, terminal transfer, translesion synthesis (TLS), non-homologous end joining (NHEJ), and possibly in restarting stalled replication forks. Primases typically synthesize primers from ribonucleotides (NTPs); however, primases with polymerase capabilities also have an affinity for deoxyribonucleotides (dNTPs). Primases with terminal transferase functionality are capable of adding nucleotides to the 3’ end of a DNA strand independently of a template. Other enzymes involved in DNA replication, such as helicases, may also exhibit primase activity.
In eukaryotes and archaea
Human PrimPol (ccdc111) serves both primase and polymerase functions, like many archaeal primases; exhibits terminal transferase activity in the presence of manganese; and plays a significant role in translesion synthesis and in restarting stalled replication forks. PrimPol is actively recruited to damaged sites through its interaction with RPA, an adapter protein that facilitates DNA replication and repair. PrimPol has a zinc finger domain similar to that of some viral primases, which is essential for translesion synthesis and primase activity and may regulate primer length. Unlike most primases, PrimPol is uniquely capable of starting DNA chains with dNTPs.
PriS, the archaeal primase small subunit, has a role in translesion synthesis (TLS) and can bypass common DNA lesions. Most archaea lack the specialized polymerases that perform TLS in eukaryotes and bacteria. PriS alone preferentially synthesizes strings of DNA; but in combination with PriL, the large subunit, RNA polymerase activity is increased.
In Sulfolobus solfataricus, the primase heterodimer PriSL can act as a primase, polymerase, and terminal transferase. PriSL is thought to initiate primer synthesis with NTPs and then switch to dNTPs. The enzyme can polymerize RNA or DNA chains, with DNA products reaching as long as 7000 nucleotides (7 kb). It is suggested that this dual functionality may be a common feature of archaeal primases.
In bacteria
AEP multifunctional primases also appear in bacteria and phages that infect them. They can display novel domain organizations with domains that bring even more functions beyond polymerization.
Bacterial LigD is primarily involved in the NHEJ pathway. It has an AEP superfamily polymerase/primase domain, a 3'-phosphoesterase domain, and a ligase domain. It is also capable of primase, DNA and RNA polymerase, and terminal transferase activity. DNA polymerization activity can produce chains over 7000 nucleotides (7 kb) in length, while RNA polymerization produces chains up to 1 kb long.
In viruses and plasmids
AEP enzymes are widespread and can be found encoded in mobile genetic elements, including viruses/phages and plasmids. These elements either use an AEP as their sole replication protein or in combination with other replication-associated proteins, such as helicases and, less frequently, DNA polymerases. Whereas the presence of AEPs in eukaryotic and archaeal viruses is expected, in that they mirror their hosts, bacterial viruses and plasmids encode AEP-superfamily enzymes as frequently as they do DnaG-family primases. A great diversity of AEP families has been uncovered in various bacterial plasmids by comparative genomics surveys. Their evolutionary history is currently unknown, as those found in bacteria and bacteriophages appear too different from their archaeo-eukaryotic homologs for a recent horizontal gene transfer.
MCM-like helicase in Bacillus cereus strain ATCC 14579 (BcMCM) is an SF6 helicase fused with an AEP primase. The enzyme has both primase and polymerase functions in addition to helicase function. The gene coding for it is found in a prophage. It bears homology to ORF904 of plasmid pRN1 from Sulfolobus islandicus, which has an AEP PrimPol domain. Vaccinia virus D5 and the HSV primase are further examples of AEP-helicase fusions.
PolpTN2 is an archaeal primase encoded by the plasmid pTN2. A fusion of domains homologous to PriS and PriL, it exhibits both primase and DNA polymerase activity, as well as terminal transferase function. Unlike most primases, PolpTN2 forms primers composed exclusively of dNTPs. Unexpectedly, when the PriL-like domain was truncated, PolpTN2 could also synthesize DNA on an RNA template, i.e., it acted as an RNA-dependent DNA polymerase (reverse transcriptase).
Even DnaG primases can have extra functions, if given the right domains. The T7 phage gp4 is a DnaG primase-helicase fusion, and performs both functions in replication.
References
External links
Overview article on primase structure and function (1995)
Proteopedia: Helicase-binding domain of Escherichia coli primase
Proteopedia: Complex between the DnaB helicase and the DnaG primase
EC 2.7.7
DNA replication | Primase | [
"Biology"
] | 2,216 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
395,585 | https://en.wikipedia.org/wiki/Floating%20raft%20system | Floating raft is a land-based building foundation that protects it against settlement and liquefaction of soft soil from seismic activity. It was a necessary innovation in the development of tall buildings in the wet soil of Chicago in the 19th century, when it was developed by John Wellborn Root who came up with the idea of interlacing the concrete slab with steel beams. The earliest precursor to the modern version may be the concrete rafts developed for the building of Millbank Prison in 1815 by Robert Smirke.
For a floating raft foundation – or simply "floating foundation" – the foundation has a volume such that, if that volume filled with soil, it would be equal in weight to the total weight of the structure.
When the soil is so soft that not even friction piles will support the building load, this type of foundation is the final option; it makes the building behave like a boat: obeying Archimedes' principle, it is buoyed up by the weight of the earth displaced in creating the foundation.
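A minimal worked example of this balance, assuming a uniform soil unit weight and ignoring groundwater and safety factors (all numbers are hypothetical): the excavation must be deep enough that the removed soil weighs as much as the building.

```python
def excavation_depth_m(building_weight_kN, footprint_area_m2, soil_unit_weight_kN_m3):
    """Depth of excavation needed so the removed soil weighs as much as the
    building (a fully compensated, or 'floating', foundation)."""
    return building_weight_kN / (footprint_area_m2 * soil_unit_weight_kN_m3)

# Hypothetical example: a 180,000 kN building on a 30 m x 40 m footprint,
# in soil with a unit weight of 18 kN/m3.
print(round(excavation_depth_m(180_000, 30 * 40, 18.0), 1))  # ~8.3 m
```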
Buildings with a floating raft system
Jin Mao Tower (1999) in Shanghai, China
Rönesans Rezidans (2012) in Antakya, Turkey
See also
Buoyancy
Building construction
Foundations (buildings and structures) | Floating raft system | [
"Engineering"
] | 249 | [
"Structural engineering",
"Foundations (buildings and structures)"
] |
395,744 | https://en.wikipedia.org/wiki/Small%20GTPase | Small GTPases (), also known as small G-proteins, are a family of hydrolase enzymes that can bind and hydrolyze guanosine triphosphate (GTP). They are a type of G-protein found in the cytosol that are homologous to the alpha subunit of heterotrimeric G-proteins, but unlike the alpha subunit of G proteins, a small GTPase can function independently as a hydrolase enzyme to bind to and hydrolyze a guanosine triphosphate (GTP) to form guanosine diphosphate (GDP). The best-known members are the Ras GTPases and hence they are sometimes called Ras subfamily GTPases.
A typical G-protein is active when bound to GTP and inactive when bound to GDP (i.e. when the GTP is hydrolyzed to GDP). The GDP can then be replaced by free GTP. Therefore, a G-protein can be switched on and off. GTP hydrolysis is accelerated by GTPase activating proteins (GAPs), while GTP exchange is catalyzed by guanine nucleotide exchange factors (GEFs). Activation of a GEF typically activates its cognate G-protein, while activation of a GAP results in inactivation of the cognate G-protein.
Guanine nucleotide dissociation inhibitors (GDIs) maintain small GTPases in the inactive state.
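A toy state model (not a biochemical simulation) can make the switch logic explicit: GEFs turn the switch on by promoting GDP-to-GTP exchange, GAPs turn it off by accelerating hydrolysis, and GDIs hold the GDP-bound form inactive. The class and method names below are illustrative only.

```python
class SmallGTPase:
    """Toy model of a small GTPase switch: active when GTP-bound, inactive when GDP-bound."""

    def __init__(self):
        self.bound = "GDP"      # start in the inactive state
        self.gdi_bound = False  # GDI sequesters the GDP-bound form

    @property
    def active(self):
        return self.bound == "GTP"

    def gef(self):
        """Guanine nucleotide exchange factor: swaps GDP for GTP (switch on)."""
        if not self.gdi_bound:
            self.bound = "GTP"

    def gap(self):
        """GTPase-activating protein: accelerates GTP hydrolysis (switch off)."""
        if self.bound == "GTP":
            self.bound = "GDP"

    def bind_gdi(self):
        """Guanine nucleotide dissociation inhibitor: locks the inactive state."""
        if self.bound == "GDP":
            self.gdi_bound = True

g = SmallGTPase()
g.gef(); print(g.active)               # True  - GEF activated the switch
g.gap(); print(g.active)               # False - GAP inactivated it
g.bind_gdi(); g.gef(); print(g.active)  # False - GDI keeps it switched off
```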
Small GTPases regulate a wide variety of processes in the cell, including growth, cellular differentiation, cell movement and lipid vesicle transport.
The Ras superfamily
There are more than a hundred proteins in the Ras superfamily. Based on structure, sequence and function, the Ras superfamily is divided into five main families, (Ras, Rho, Ran, Rab and Arf GTPases). The Ras family itself is further divided into 6 subfamilies: Ras, Ral, Rit, Rap, Rheb, and Rad. Miro is a recent contributor to the superfamily.
Each subfamily shares the common core G domain, which provides essential GTPase and nucleotide exchange activity.
The surrounding sequence helps determine the functional specificity of the small GTPase, for example the 'Insert Loop', common to the Rho subfamily, specifically contributes to binding to effector proteins such as IQGAP and WASP.
The Ras family is generally responsible for cell proliferation, Rho for cell morphology, Ran for nuclear transport and Rab and Arf for vesicle transport.
See also
GTP-binding protein regulators
References
External links
G proteins
Peripheral membrane proteins | Small GTPase | [
"Chemistry"
] | 550 | [
"G proteins",
"Signal transduction"
] |
395,877 | https://en.wikipedia.org/wiki/Histamine | Histamine is an organic nitrogenous compound involved in local immune responses communication, as well as regulating physiological functions in the gut and acting as a neurotransmitter for the brain, spinal cord, and uterus. Discovered in 1910, histamine has been considered a local hormone (autocoid) because it's produced without involvement of the classic endocrine glands; however, in recent years, histamine has been recognized as a central neurotransmitter. Histamine is involved in the inflammatory response and has a central role as a mediator of itching. As part of an immune response to foreign pathogens, histamine is produced by basophils and by mast cells found in nearby connective tissues. Histamine increases the permeability of the capillaries to white blood cells and some proteins, to allow them to engage pathogens in the infected tissues. It consists of an imidazole ring attached to an ethylamine chain; under physiological conditions, the amino group of the side-chain is protonated.
Properties
Histamine base, obtained as a mineral oil mull, melts at 83–84 °C. Its hydrochloride and phosphate salts form white hygroscopic crystals and are easily dissolved in water or ethanol, but not in ether. In aqueous solution, the imidazole ring of histamine exists in two tautomeric forms, identified by which of the two nitrogen atoms is protonated. The nitrogen farther away from the side chain is the 'tele' nitrogen and is denoted by a lowercase tau sign, while the nitrogen closer to the side chain is the 'pros' nitrogen and is denoted by the pi sign. The tele tautomer, Nτ-H-histamine, is preferred in solution over the pros tautomer, Nπ-H-histamine.
Histamine has two basic centres, namely the aliphatic amino group and whichever nitrogen atom of the imidazole ring does not already have a proton. Under physiological conditions, the aliphatic amino group (having a pKa around 9.4) will be protonated, whereas the second nitrogen of the imidazole ring (pKa ≈ 5.8) will not be protonated.
Thus, histamine is normally protonated to a singly charged cation. Since human blood is slightly basic (with a normal pH range of 7.35 to 7.45), the predominant form of histamine in human blood is singly protonated at the aliphatic nitrogen. Histamine is a monoamine neurotransmitter.
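A short calculation with the pKa values quoted above shows why the singly protonated cation dominates at blood pH; it applies the Henderson-Hasselbalch relation to each basic centre (the function name is illustrative).

```python
def fraction_protonated(pKa, pH):
    """Fraction of a basic centre carrying a proton at a given pH
    (Henderson-Hasselbalch: protonated/unprotonated = 10**(pKa - pH))."""
    ratio = 10 ** (pKa - pH)
    return ratio / (1 + ratio)

pH_blood = 7.4
print(round(fraction_protonated(9.4, pH_blood), 3))  # ~0.99: aliphatic amine, essentially always protonated
print(round(fraction_protonated(5.8, pH_blood), 3))  # ~0.02: imidazole ring, mostly unprotonated
```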
Synthesis and metabolism
Histamine is derived from the decarboxylation of the amino acid histidine, a reaction catalyzed by the enzyme L-histidine decarboxylase. It is a hydrophilic vasoactive amine.
Once formed, histamine is either stored or rapidly inactivated by its primary degradative enzymes, histamine-N-methyltransferase or diamine oxidase. In the central nervous system, histamine released into the synapses is primarily broken down by histamine-N-methyltransferase, while in other tissues both enzymes may play a role. Several other enzymes, including MAO-B and ALDH2, further process the immediate metabolites of histamine for excretion or recycling.
Bacteria also are capable of producing histamine using histidine decarboxylase enzymes unrelated to those found in animals. A non-infectious form of foodborne disease, scombroid poisoning, is due to histamine production by bacteria in spoiled food, particularly fish. Fermented foods and beverages naturally contain small quantities of histamine due to a similar conversion performed by fermenting bacteria or yeasts. Sake contains histamine in the 20–40 mg/L range; wines contain it in the 2–10 mg/L range.
Storage and release
Most histamine in the body is generated in granules in mast cells and in white blood cells (leukocytes) called basophils. Mast cells are especially numerous at sites of potential injury – the nose, mouth, and feet, internal body surfaces, and blood vessels. Non-mast cell histamine is found in several tissues, including the hypothalamus region of the brain, where it functions as a neurotransmitter. Another important site of histamine storage and release is the enterochromaffin-like (ECL) cell of the stomach.
The most important pathophysiologic mechanism of mast cell and basophil histamine release is immunologic. These cells, if sensitized by IgE antibodies attached to their membranes, degranulate when exposed to the appropriate antigen. Certain amines and alkaloids, including such drugs as morphine, and curare alkaloids, can displace histamine in granules and cause its release. Antibiotics like polymyxin are also found to stimulate histamine release.
Histamine release occurs when allergens bind to mast-cell-bound IgE antibodies. Reduction of IgE overproduction may lower the likelihood of allergens finding sufficient free IgE to trigger a mast-cell-release of histamine.
Degradation
Histamine is released by mast cells as an immune response and is later degraded primarily by two enzymes: diamine oxidase (DAO), coded by AOC1 genes, and histamine-N-methyltransferase (HNMT), coded by the HNMT gene. The presence of single nucleotide polymorphisms (SNPs) at these genes are associated with a wide variety of disorders, from ulcerative colitis to autism spectrum disorder (ASD). Histamine degradation is crucial to the prevention of allergic reactions to otherwise harmless substances.
DAO is typically expressed in epithelial cells at the tip of the villus of the small intestine mucosa. Reduced DAO activity is associated with gastrointestinal disorders and widespread food intolerances. This is due to an increase in histamine absorption through enterocytes, which increases histamine concentration in the bloodstream. One study found that migraine patients with gluten sensitivity were positively correlated with having lower DAO serum levels. Low DAO activity can have more severe consequences, as mutations in the ABP1 alleles of the AOC1 gene have been associated with ulcerative colitis. Heterozygous or homozygous recessive genotypes at the rs2052129, rs2268999, rs10156191 and rs1049742 alleles increased the risk for reduced DAO activity. People with genotypes for reduced DAO activity can avoid foods high in histamine, such as alcohol, fermented foods, and aged foods, to attenuate any allergic reactions. Additionally, they should be aware of whether any probiotics they are taking contain any histamine-producing strains and consult with their doctor to receive proper support.
HNMT is expressed in the central nervous system, where deficiencies have been shown to lead to aggressive behavior and abnormal sleep-wake cycles in mice. Since brain histamine as a neurotransmitter regulates a number of neurophysiological functions, emphasis has been placed on the development of drugs to target histamine regulation. Yoshikawa et al. explores how the C314T, A939G, G179A, and T632C polymorphisms all impact HNMT enzymatic activity and the pathogenesis of various neurological disorders. These mutations can have either a positive or negative impact. Some patients with ADHD have been shown to exhibit exacerbated symptoms in response to food additives and preservatives, due in part to histamine release. In a double-blind placebo-controlled crossover trial, children with ADHD who responded with aggravated symptoms after consuming a challenge beverage were more likely to have HNMT polymorphisms at T939C and Thr105Ile. Histamine's role in neuroinflammation and cognition has made it a target of study for many neurological disorders, including autism spectrum disorder (ASD). De novo deletions in the HNMT gene have also been associated with ASD.
Mast cells serve an important immunological role by defending the body from antigens and maintaining homeostasis in the gut microbiome. They act as an alarm to trigger inflammatory responses by the immune system. Their presence in the digestive system enables them to serve as an early barrier to pathogens entering the body. People who suffer from widespread sensitivities and allergic reactions may have mast cell activation syndrome (MCAS), in which excessive amounts of histamine are released from mast cells, and cannot be properly degraded. The abnormal release of histamine can be caused by either dysfunctional internal signals from defective mast cells or by the development of clonal mast cell populations through mutations occurring in the tyrosine kinase Kit. In such cases, the body may not be able to produce sufficient degradative enzymes to properly eliminate the excess histamine. Since MCAS is symptomatically characterized as such a broad disorder, it is difficult to diagnose and can be mislabeled as a variety of diseases, including irritable bowel syndrome and fibromyalgia.
Histamine is often explored as a potential cause for diseases related to hyper-responsiveness of the immune system. In patients with asthma, abnormal histamine receptor activation in the lungs is associated with bronchospasm, airway obstruction, and production of excess mucus. Mutations in histamine degradation are more common in patients with a combination of asthma and allergen hypersensitivity than in those with just asthma. The HNMT-464 TT and HNMT-1639 TT polymorphisms are significantly more common among children with allergic asthma, the latter of which is overrepresented in African-American children.
Mechanism of action
In humans, histamine exerts its effects primarily by binding to G protein-coupled histamine receptors, designated H1 through H4. Histamine is also believed to activate ligand-gated chloride channels in the brain and intestinal epithelium.
Roles in the body
Although histamine is small compared to other biological molecules (containing only 17 atoms), it plays an important role in the body and is known to be involved in 23 different physiological functions. This versatility stems from its chemical properties: it is Coulombic (able to carry a charge), conformationally adaptable, and flexible, which allows it to interact and bind readily.
Vasodilation and fall in blood pressure
It has been known for more than one hundred years that an intravenous injection of histamine causes a fall in the blood pressure. The underlying mechanism concerns both vascular hyperpermeability and vasodilation. Histamine binding to endothelial cells causes them to contract, thus increasing vascular leak. It also stimulates synthesis and release of various vascular smooth muscle cell relaxants, such as nitric oxide, endothelium-derived hyperpolarizing factors and other compounds, resulting in blood vessel dilation. These two mechanisms play a key role in the pathophysiology of anaphylaxis.
Effects on nasal mucous membrane
Increased vascular permeability causes fluid to escape from capillaries into the tissues, which leads to the classic symptoms of an allergic reaction: a runny nose and watery eyes. Allergens can bind to IgE-loaded mast cells in the nasal cavity's mucous membranes. This can lead to three clinical responses:
sneezing due to histamine-associated sensory neural stimulation
hyper-secretion from glandular tissue
nasal congestion due to vascular engorgement associated with vasodilation and increased capillary permeability
Sleep-wake regulation
Histamine is a neurotransmitter that is released from histaminergic neurons which project out of the mammalian hypothalamus. The cell bodies of these neurons are located in a portion of the posterior hypothalamus known as the tuberomammillary nucleus (TMN). The histamine neurons in this region comprise the brain's histamine system, which projects widely throughout the brain and includes axonal projections to the cortex, medial forebrain bundle, other hypothalamic nuclei, medial septum, the nucleus of the diagonal band, ventral tegmental area, amygdala, striatum, substantia nigra, hippocampus, thalamus and elsewhere. The histamine neurons in the TMN are involved in regulating the sleep-wake cycle and promote arousal when activated. The neural firing rate of histamine neurons in the TMN is strongly positively correlated with an individual's state of arousal. These neurons fire rapidly during periods of wakefulness, fire more slowly during periods of relaxation/tiredness, and stop firing altogether during REM and NREM (non-REM) sleep.
First-generation H1 antihistamines (i.e., antagonists of histamine receptor H1) are capable of crossing the blood–brain barrier and produce drowsiness by antagonizing histamine H1 receptors in the tuberomammillary nucleus. The newer class of second-generation H1 antihistamines do not readily permeate the blood–brain barrier and thus are less likely to cause sedation, although individual reactions, concomitant medications and dosage may increase the likelihood of a sedating effect. In contrast, histamine H3 receptor antagonists increase wakefulness. Similar to the sedative effect of first-generation H1 antihistamines, an inability to maintain vigilance can occur from the inhibition of histamine biosynthesis or the loss (i.e., degeneration or destruction) of histamine-releasing neurons in the TMN.
Gastric acid release
Enterochromaffin-like cells in the stomach release histamine, which stimulates parietal cells via H2 receptors. The parietal cells take up carbon dioxide and water from the blood, which carbonic anhydrase converts to carbonic acid. The acid dissociates into hydrogen and bicarbonate ions within the parietal cell. Bicarbonate returns to the bloodstream, while hydrogen ions are pumped into the stomach lumen. Histamine release ceases as stomach pH decreases. Antagonist molecules, such as ranitidine or famotidine, block the H2 receptor and prevent histamine from binding, decreasing hydrogen ion secretion.
Protective effects
While histamine has stimulatory effects upon neurons, it also has suppressive ones that protect against the susceptibility to convulsion, drug sensitization, denervation supersensitivity, ischemic lesions and stress. It has also been suggested that histamine controls the mechanisms by which memories and learning are forgotten.
Erection and sexual function
Loss of libido and erectile dysfunction can occur during treatment with histamine H2 receptor antagonists such as cimetidine and ranitidine, as well as with risperidone. The injection of histamine into the corpus cavernosum in men with psychogenic impotence produces full or partial erections in 74% of them. It has been suggested that H2 antagonists may cause sexual dysfunction by reducing the functional binding of testosterone to its androgen receptors.
Schizophrenia
Metabolites of histamine are increased in the cerebrospinal fluid of people with schizophrenia, while the efficiency of H1 receptor binding sites is decreased. Many atypical antipsychotic medications have the effect of increasing histamine production, because histamine levels seem to be imbalanced in people with that disorder.
Multiple sclerosis
Histamine therapy for treatment of multiple sclerosis is currently being studied. The different H receptors are known to have different effects on the treatment of this disease. The H1 and H4 receptors have, in one study, been shown to be counterproductive in the treatment of MS: they are thought to increase permeability of the blood-brain barrier, increasing infiltration of unwanted cells into the central nervous system, which can cause inflammation and worsening of MS symptoms. The H2 and H3 receptors are thought to be helpful when treating MS patients. Histamine has been shown to help with T-cell differentiation. This is important because in MS the body's immune system attacks its own myelin sheaths on nerve cells (which causes loss of signaling function and eventual nerve degeneration). By helping T cells differentiate, histamine makes them less likely to attack the body's own cells and more likely to attack invaders.
Disorders
As an integral part of the immune system, histamine may be involved in immune system disorders and allergies. Mastocytosis is a rare disease in which there is a proliferation of mast cells that produce excess histamine.
Histamine intolerance is a presumed set of adverse reactions (such as flush, itching, rhinitis, etc.) to ingested histamine in food. The mainstream theory accepts that there may exist adverse reactions to ingested histamine, but does not recognize histamine intolerance as a separate condition that can be diagnosed.
The role of histamine in health and disease is an area of ongoing research. For example, histamine is being studied for a potential link with migraine episodes, during which elevated plasma concentrations of both histamine and calcitonin gene-related peptide (CGRP) have been noted. These two substances are potent vasodilators and have been demonstrated to mutually stimulate each other's release within the trigeminovascular system, a mechanism that could potentially instigate the onset of migraines. In patients with a deficiency in histamine degradation due to variants in the AOC1 gene, which encodes the diamine oxidase enzyme, a diet high in histamine has been observed to trigger migraines. This suggests a functional relationship between exogenous histamine and CGRP that could be instrumental in understanding the genesis of diet-induced migraines. The role of histamine, particularly in relation to CGRP, is therefore a promising area of research for elucidating the mechanisms underlying migraine development and aggravation, especially in the context of dietary triggers and genetic predispositions related to histamine metabolism.
Measurement
Histamine, a biogenic amine, is involved in many physiological functions, including the immune response, gastric acid secretion, and neuromodulation. However, its rapid metabolism makes it challenging to measure histamine levels directly in plasma.
Because histamine is metabolized so rapidly, measuring histamine metabolites, particularly 1,4-methyl-imidazolacetic acid, in a 24-hour urine sample provides an efficient alternative to direct histamine measurement: the values of these metabolites remain elevated for a much longer period than histamine itself.
Commercial laboratories provide a 24-hour urine sample test for 1,4-methyl-imidazolacetic acid, the metabolite of histamine. This test is a valuable tool in assessing the metabolism of histamine in the body, as direct measurement of histamine in the serum has low diagnostic value due to the specificities of histamine metabolism.
The urine test involves collecting all urine produced in a 24-hour period, which is then analyzed for the presence of 1,4-methyl-imidazolacetic acid. This comprehensive approach gives a more accurate reflection of histamine metabolism over an extended period; as such, the 1,4-methyl-imidazolacetic acid urine test offered by commercial laboratories is currently the most reliable method for determining the rate of histamine metabolism, which may help health care practitioners assess a patient's health status, for example when diagnosing interstitial cystitis.
History
The properties of histamine, then called β-imidazolylethylamine, were first described in 1910 by the British scientists Henry H. Dale and P.P. Laidlaw. By 1913 the name histamine was in use, using combining forms of histo- + amine, yielding "tissue amine".
"H substance" or "substance H" are occasionally used in medical literature for histamine or a hypothetical histamine-like diffusible substance released in allergic reactions of skin and in the responses of tissue to inflammation.
See also
Anaphylaxis
Diamine oxidase
Histamine N-methyltransferase
Hay fever (allergic rhinitis)
Histamine intolerance
Histamine receptor antagonist
Scombroid food poisoning
Photic sneeze reflex
References
External links
Histamine MS Spectrum
Histamine bound to proteins in the PDB
Biogenic amines
Amines
Imidazoles
Immune system
Vasodilators
Immunostimulants
Neurotransmitters
TAAR1 agonists
Carbonic anhydrase activators | Histamine | [
"Chemistry",
"Biology"
] | 4,511 | [
"Biomolecules by chemical classification",
"Biogenic amines",
"Immune system",
"Neurotransmitters",
"Organ systems",
"Neurochemistry"
] |
396,022 | https://en.wikipedia.org/wiki/Euler%20equations%20%28fluid%20dynamics%29 | In fluid dynamics, the Euler equations are a set of partial differential equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In particular, they correspond to the Navier–Stokes equations with zero viscosity and zero thermal conductivity.
The Euler equations can be applied to incompressible and compressible flows. The incompressible Euler equations consist of Cauchy equations for conservation of mass and balance of momentum, together with the incompressibility condition that the flow velocity is divergence-free. The compressible Euler equations consist of equations for conservation of mass, balance of momentum, and balance of energy, together with a suitable constitutive equation for the specific energy density of the fluid. Historically, only the equations of conservation of mass and balance of momentum were derived by Euler. However, fluid dynamics literature often refers to the full set of the compressible Euler equations – including the energy equation – as "the compressible Euler equations".
The mathematical characters of the incompressible and compressible Euler equations are rather different. For constant fluid density, the incompressible equations can be written as a quasilinear advection equation for the fluid velocity together with an elliptic Poisson's equation for the pressure. On the other hand, the compressible Euler equations form a quasilinear hyperbolic system of conservation equations.
The Euler equations can be formulated in a "convective form" (also called the "Lagrangian form") or a "conservation form" (also called the "Eulerian form"). The convective form emphasizes changes to the state in a frame of reference moving with the fluid. The conservation form emphasizes the mathematical interpretation of the equations as conservation equations for a control volume fixed in space (which is useful
from a numerical point of view).
History
The Euler equations first appeared in published form in Euler's article "Principes généraux du mouvement des fluides", published in Mémoires de l'Académie des Sciences de Berlin in 1757 (although Euler had previously presented his work to the Berlin Academy in 1752). Prior work included contributions from the Bernoulli family as well as from Jean le Rond d'Alembert.
The Euler equations were among the first partial differential equations to be written down, after the wave equation. In Euler's original work, the system of equations consisted of the momentum and continuity equations, and thus was underdetermined except in the case of an incompressible flow. An additional equation, which was called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816.
During the second half of the 19th century, it was found that the equation related to the balance of energy must at all times be kept for compressible flows, and the adiabatic condition is a consequence of the fundamental laws in the case of smooth solutions. With the discovery of the special theory of relativity, the concepts of energy density, momentum density, and stress were unified into the concept of the stress–energy tensor, and energy and momentum were likewise unified into a single concept, the energy–momentum vector.
Incompressible Euler equations with constant and uniform density
In convective form (i.e., the form with the convective operator made explicit in the momentum equation), the incompressible Euler equations in case of density constant in time and uniform in space are:
where:
is the flow velocity vector, with components in an N-dimensional space ,
, for a generic function (or field) denotes its material derivative in time with respect to the advective field and
is the gradient of the specific (with the sense of per unit mass) thermodynamic work, the internal source term, and
is the flow velocity divergence.
represents body accelerations (per unit mass) acting on the continuum, for example gravity, inertial accelerations, electric field acceleration, and so on.
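(As a sketch, the standard convective form described here, using the conventional symbols u for the flow velocity, w for the specific thermodynamic work and g for the body acceleration, is
\[
\frac{D\mathbf{u}}{Dt} = -\nabla w + \mathbf{g}, \qquad \nabla \cdot \mathbf{u} = 0,
\]
with D/Dt = ∂/∂t + u·∇ the material derivative.)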
The first equation is the Euler momentum equation with uniform density (for this equation alone, the density need not be constant in time). By expanding the material derivative, the equations become:
In fact for a flow with uniform density the following identity holds:
where is the mechanical pressure. The second equation is the incompressible constraint, stating that the flow velocity is a solenoidal field (the order of the equations is not causal, but underlines the fact that the incompressible constraint is not a degenerate form of the continuity equation, but rather of the energy equation, as will become clear in the following). Notably, the continuity equation would also be required in this incompressible case as an additional third equation if the density varied in time or in space. For example, with density nonuniform in space but constant in time, the continuity equation to be added to the above set would correspond to:
So the case of constant and uniform density is the only one not requiring the continuity equation as additional equation regardless of the presence or absence of the incompressible constraint. In fact, the case of incompressible Euler equations with constant and uniform density discussed here is a toy model featuring only two simplified equations, so it is ideal for didactical purposes even if with limited physical relevance.
The equations above thus represent respectively conservation of mass (1 scalar equation) and momentum (1 vector equation containing scalar components, where is the physical dimension of the space of interest). Flow velocity and pressure are the so-called physical variables.
In a coordinate system given by the velocity and external force vectors and have components and , respectively. Then the equations may be expressed in subscript notation as:
where the and subscripts label the N-dimensional space components, and is the Kronecker delta. The use of Einstein notation (where the sum is implied by repeated indices instead of sigma notation) is also frequent.
Properties
Although Euler first presented these equations in 1755, many fundamental questions or concepts about them remain unanswered.
In three space dimensions, in certain simplified scenarios, the Euler equations produce singularities.
Smooth solutions of the free (in the sense of without source term: g=0) equations satisfy the conservation of specific kinetic energy:
In the one-dimensional case without the source term (both pressure gradient and external force), the momentum equation becomes the inviscid Burgers' equation:
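(For reference, the inviscid Burgers' equation referred to here is, writing u for the velocity,
\[
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0 .
\])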
This model equation gives many insights into Euler equations.
Nondimensionalisation
In order to make the equations dimensionless, a characteristic length , and a characteristic velocity , need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:
and of the field unit vector:
Substitution of these inverse relations in the Euler equations, defining the Froude number, yields (omitting the asterisks):
Euler equations in the Froude limit (no external field) are named free equations and are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory.
Conservation form
The conservation form emphasizes the mathematical properties of Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are some advantages in using the conserved variables. This gives rise to a large class of numerical methods
called conservative methods.
The free Euler equations are conservative, in the sense they are equivalent to a conservation equation:
or simply in Einstein notation:
where the conservation quantity in this case is a vector, and is a flux matrix. This can be simply proved.
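(As a sketch of the conservation form meant here, writing y for the vector of conserved quantities and F for the flux matrix,
\[
\frac{\partial \mathbf{y}}{\partial t} + \nabla \cdot \mathbf{F} = \mathbf{0}, \qquad \text{or}\qquad \partial_t y_i + \partial_j F_{ij} = 0 .
\])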
At last Euler equations can be recast into the particular equation:
Spatial dimensions
For certain problems, especially when used to analyze compressible flow in a duct or in case the flow is cylindrically or spherically symmetric, the one-dimensional Euler equations are a useful first approximation. Generally, the Euler equations are solved by Riemann's method of characteristics. This involves finding curves in plane of independent variables (i.e., and ) along which partial differential equations (PDEs) degenerate into ordinary differential equations (ODEs). Numerical solutions of the Euler equations rely heavily on the method of characteristics.
Incompressible Euler equations
In convective form the incompressible Euler equations in case of density variable in space are:
where the additional variables are:
is the fluid mass density,
is the pressure, .
The first equation, which is the new one, is the incompressible continuity equation. In fact the general continuity equation would be:
but here the last term is identically zero for the incompressibility constraint.
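(The general continuity equation referred to is presumably, in the material-derivative notation used elsewhere in the article,
\[
\frac{D\rho}{Dt} + \rho\,\nabla \cdot \mathbf{u} = 0,
\]
so that with ∇·u = 0 it reduces to Dρ/Dt = 0.)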
Conservation form
The incompressible Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively:
Here has length and has size .
In general (not only in the Froude limit) Euler equations are expressible as:
Conservation variables
The variables for the equations in conservation form are not yet optimised. In fact we could define:
where is the momentum density, a conservation variable.
where is the force density, a conservation variable.
Euler equations
In differential convective form, the compressible (and most general) Euler equations can be written shortly with the material derivative notation:
where the additional variables here is:
is the specific internal energy (internal energy per unit mass).
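(A sketch of the standard convective form being described, with the conventional symbols ρ for the density, u for the flow velocity, p for the pressure, e for the specific internal energy and g for the body acceleration:
\[
\frac{D\rho}{Dt} = -\rho\,\nabla \cdot \mathbf{u}, \qquad
\frac{D\mathbf{u}}{Dt} = -\frac{\nabla p}{\rho} + \mathbf{g}, \qquad
\frac{De}{Dt} = -\frac{p}{\rho}\,\nabla \cdot \mathbf{u} .
\])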
The equations above thus represent conservation of mass, momentum, and energy: the energy equation expressed in the variable internal energy allows one to understand the link with the incompressible case, but it is not in the simplest form.
Mass density, flow velocity and pressure are the so-called convective variables (or physical variables, or lagrangian variables), while mass density, momentum density and total energy density are the so-called conserved variables (also called eulerian, or mathematical variables).
If one expands the material derivative the equations above are:
Incompressible constraint (revisited)
Coming back to the incompressible case, it now becomes apparent that the incompressible constraint typical of the former cases actually is a particular form valid for incompressible flows of the energy equation, and not of the mass equation. In particular, the incompressible constraint corresponds to the following very simple energy equation:
Thus for an incompressible inviscid fluid the specific internal energy is constant along the flow lines, also in a time-dependent flow. The pressure in an incompressible flow acts like a Lagrange multiplier, being the multiplier of the incompressible constraint in the energy equation, and consequently in incompressible flows it has no thermodynamic meaning. In fact, thermodynamics is typical of compressible flows and degenerates in incompressible flows.
Basing on the mass conservation equation, one can put this equation in the conservation form:
meaning that for an incompressible inviscid nonconductive flow a continuity equation holds for the internal energy.
Enthalpy conservation
Since by definition the specific enthalpy is:
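(The standard definition meant here is presumably
\[
h \equiv e + p\,v = e + \frac{p}{\rho},
\]
with v the specific volume.)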
The material derivative of the specific internal energy can be expressed as:
Then by substituting the momentum equation in this expression, one obtains:
And by substituting the latter in the energy equation, one obtains that the enthalpy expression for the Euler energy equation:
In a reference frame moving with an inviscid and nonconductive flow, the variation of enthalpy directly corresponds to a variation of pressure.
Thermodynamics of ideal fluids
In thermodynamics the independent variables are the specific volume, and the specific entropy, while the specific energy is a function of state of these two variables.
For a thermodynamic fluid, the compressible Euler equations are consequently best written as:
where:
is the specific volume
is the flow velocity vector
is the specific entropy
In the general case and not only in the incompressible case, the energy equation means that for an inviscid thermodynamic fluid the specific entropy is constant along the flow lines, also in a time-dependent flow. Basing on the mass conservation equation, one can put this equation in the conservation form:
meaning that for an inviscid nonconductive flow a continuity equation holds for the entropy.
On the other hand, the two second-order partial derivatives of the specific internal energy in the momentum equation require the specification of the fundamental equation of state of the material considered, i.e. of the specific internal energy as function of the two variables specific volume and specific entropy:
The fundamental equation of state contains all the thermodynamic information about the system (Callen, 1985), exactly like the couple of a thermal equation of state together with a caloric equation of state.
Conservation form
The Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively:
where:
is the momentum density, a conservation variable.
is the total energy density (total energy per unit volume).
Here has length N + 2 and has size N(N + 2). In general (not only in the Froude limit) Euler equations are expressible as:
where is the force density, a conservation variable.
We remark that the Euler equations, even when conservative (no external field, Froude limit), have no Riemann invariants in general; some further assumptions are required.
However, we already mentioned that for a thermodynamic fluid the equation for the total energy density is equivalent to the conservation equation:
Then the conservation equations in the case of a thermodynamic fluid are more simply expressed as:
where is the entropy density, a thermodynamic conservation variable.
Another possible form for the energy equation, being particularly useful for isobarics, is:
where is the total enthalpy density.
Quasilinear form and characteristic equations
Expanding the fluxes can be an important part of constructing numerical solvers, for example by exploiting (approximate) solutions to the Riemann problem. In regions where the state vector y varies smoothly, the equations in conservative form can be put in quasilinear form:
where are called the flux Jacobians defined as the matrices:
This Jacobian does not exist where the state variables are discontinuous, as at contact discontinuities or shocks.
Characteristic equations
The compressible Euler equations can be decoupled into a set of N+2 wave equations that describe sound propagation in the Eulerian continuum if they are expressed in characteristic variables instead of conserved variables.
In fact the tensor A is always diagonalizable. If the eigenvalues (the case of Euler equations) are all real the system is defined hyperbolic, and physically the eigenvalues represent the speeds of propagation of information. If they are all distinct, the system is defined strictly hyperbolic (this will be proved to be the case for the one-dimensional Euler equations). Furthermore, diagonalisation of the compressible Euler equations is easier when the energy equation is expressed in the variable entropy (i.e. with equations for thermodynamic fluids) than in other energy variables. This will become clear by considering the 1D case.
If is the right eigenvector of the matrix corresponding to the eigenvalue , by building the projection matrix:
One can finally find the characteristic variables as:
Since A is constant, multiplying the original 1-D equation in flux-Jacobian form with P−1 yields the characteristic equations:
The original equations have been decoupled into N+2 characteristic equations each describing a simple wave, with the eigenvalues being the wave speeds. The variables wi are called the characteristic variables and are a subset of the conservative variables. The solution of the initial value problem in terms of characteristic variables is finally very simple. In one spatial dimension it is:
Then the solution in terms of the original conservative variables is obtained by transforming back:
this computation can be made explicit as a linear combination of the eigenvectors:
Now it becomes apparent that the characteristic variables act as weights in the linear combination of the jacobian eigenvectors. The solution can be seen as superposition of waves, each of which is advected independently without change in shape. Each i-th wave has shape wipi and speed of propagation λi. In the following we show a very simple example of this solution procedure.
Waves in 1D inviscid, nonconductive thermodynamic fluid
If one considers Euler equations for a thermodynamic fluid with the two further assumptions of one spatial dimension and free (no external field: g = 0):
If one defines the vector of variables:
recalling that is the specific volume, the flow speed, the specific entropy, the corresponding jacobian matrix is:
At first one must find the eigenvalues of this matrix by solving the characteristic equation:
that is explicitly:
This determinant is very simple: the fastest computation starts on the last row, since it has the highest number of zero elements.
Now by computing the determinant 2×2:
by defining the parameter:
or equivalently in mechanical variables, as:
This parameter is always real according to the second law of thermodynamics. In fact the second law of thermodynamics can be expressed by several postulates. The most elementary of them in mathematical terms is the statement of convexity of the fundamental equation of state, i.e. the hessian matrix of the specific energy expressed as function of specific volume and specific entropy:
is defined positive. This statement corresponds to the two conditions:
The first condition is the one ensuring the parameter a is defined real.
The characteristic equation finally results:
That has three real solutions:
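(A sketch of the standard result, writing u for the flow speed and a for the parameter defined above:
\[
(u - \lambda)\left[(u - \lambda)^2 - a^2\right] = 0
\quad\Longrightarrow\quad
\lambda_1 = u - a, \qquad \lambda_2 = u, \qquad \lambda_3 = u + a .
\])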
Then the matrix has three real eigenvalues, all distinct: the 1D Euler equations are a strictly hyperbolic system.
At this point one should determine the three eigenvectors: each one is obtained by substituting one eigenvalue in the eigenvalue equation and then solving it. By substituting the first eigenvalue λ1 one obtains:
Basing on the third equation that simply has solution s1=0, the system reduces to:
The two equations are redundant as usual, so the eigenvector is defined up to a multiplicative constant. We choose as right eigenvector:
The other two eigenvectors can be found with analogous procedure as:
Then the projection matrix can be built:
Finally it becomes apparent that the real parameter a previously defined is the speed of propagation of the information characteristic of the hyperbolic system made of Euler equations, i.e. it is the wave speed. It remains to be shown that the sound speed corresponds to the particular case of an isentropic transformation:
Compressibility and sound speed
Sound speed is defined as the wavespeed of an isentropic transformation:
by the definition of the isentropic compressibility:
the sound speed always results as the square root of the ratio between the isentropic compressibility and the density:
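(As a sketch, assuming the article's "isentropic compressibility" denotes the quantity K_s ≡ ρ(∂p/∂ρ)_s, elsewhere often called the isentropic bulk modulus, the relation described is
\[
a = \sqrt{\left(\frac{\partial p}{\partial \rho}\right)_{\!s}} = \sqrt{\frac{K_s}{\rho}} .
\])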
Ideal gas
The sound speed in an ideal gas depends only on its temperature:
Since the specific enthalpy in an ideal gas is proportional to its temperature:
the sound speed in an ideal gas can also be made dependent only on its specific enthalpy:
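(A sketch of these relations for an ideal gas with heat capacity ratio γ, molecular mass M and gas constant R:
\[
a = \sqrt{\gamma\,\frac{R}{M}\,T}, \qquad
h = c_p T = \frac{\gamma}{\gamma - 1}\,\frac{R}{M}\,T, \qquad
a = \sqrt{(\gamma - 1)\,h} .
\])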
Bernoulli's theorem for steady inviscid flow
Bernoulli's theorem is a direct consequence of the Euler equations.
Incompressible case and Lamb's form
The vector calculus identity of the cross product of a curl holds:
where the Feynman subscript notation is used, which means the subscripted gradient operates only on the factor .
Lamb in his famous classical book Hydrodynamics (1895), still in print, used this identity to change the convective term of the flow velocity in rotational form:
the Euler momentum equation in Lamb's form becomes:
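(As a sketch, the identity and the resulting Lamb form for the incompressible case are, writing ∇×u for the vorticity,
\[
\mathbf{u}\cdot\nabla\mathbf{u} = (\nabla\times\mathbf{u})\times\mathbf{u} + \tfrac{1}{2}\nabla\!\left(u^2\right),
\qquad
\frac{\partial \mathbf{u}}{\partial t} + (\nabla\times\mathbf{u})\times\mathbf{u} + \nabla\!\left(\tfrac{1}{2}u^2\right) = -\nabla w + \mathbf{g} .
\])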
Now, basing on the other identity:
the Euler momentum equation assumes a form that is optimal to demonstrate Bernoulli's theorem for steady flows:
In fact, in case of an external conservative field, by defining its potential φ:
In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes:
And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears because its result is always perpendicular to the velocity:
In the steady incompressible case the mass equation is simply:
that is the mass conservation for a steady incompressible flow states that the density along a streamline is constant. Then the Euler momentum equation in the steady incompressible case becomes:
The convenience of defining the total head for an inviscid liquid flow is now apparent:
which may be simply written as:
That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant.
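(A sketch of the invariant being described, writing φ for the potential of the external conservative field and b_l for the total head, where the symbol b_l is an assumption:
\[
b_l \equiv \frac{u^2}{2} + \phi + \frac{p}{\rho}, \qquad \mathbf{u}\cdot\nabla b_l = 0,
\]
i.e. b_l is constant along each streamline.)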
Compressible case
In the most general steady (compressible) case the mass equation in conservation form is:
Therefore, the previous expression is rather:
The right-hand side appears in the energy equation in convective form, which in the steady state reads:
The energy equation therefore becomes:
so that the internal specific energy now features in the head.
Since the external field potential is usually small compared to the other terms, it is convenient to group the latter ones in the total enthalpy:
and the Bernoulli invariant for an inviscid gas flow is:
which can be written as:
That is, the energy balance for a steady inviscid flow in an external conservative field states that the sum of the total enthalpy and the external potential is constant along a streamline.
In the usual case of small potential field, simply:
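(As a sketch, with h^t the total enthalpy,
\[
h^t \equiv h + \frac{u^2}{2}, \qquad \mathbf{u}\cdot\nabla\!\left(h^t + \phi\right) = 0,
\]
and, for a small potential field, simply h + u²/2 ≈ const along a streamline.)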
Friedmann form and Crocco form
By substituting the pressure gradient with the entropy and enthalpy gradient, according to the first law of thermodynamics in the enthalpy form:
in the convective form of Euler momentum equation, one arrives to:
Friedmann deduced this equation for the particular case of a perfect gas and published it in 1922. However, this equation is general for an inviscid nonconductive fluid and no equation of state is implicit in it.
On the other hand, by substituting the enthalpy form of the first law of thermodynamics in the rotational form of Euler momentum equation, one obtains:
and by defining the specific total enthalpy:
one arrives to the Crocco–Vazsonyi form (Crocco, 1937) of the Euler momentum equation:
In the steady case the two variables entropy and total enthalpy are particularly useful since Euler equations can be recast into the Crocco's form:
Finally if the flow is also isothermal:
by defining the specific total Gibbs free energy:
the Crocco's form can be reduced to:
From these relationships one deduces that the specific total free energy is uniform in a steady, irrotational, isothermal, isentropic, inviscid flow.
Discontinuities
The Euler equations are quasilinear hyperbolic equations and their general solutions are waves. Under certain assumptions they can be simplified, leading to the Burgers equation. Much like the familiar oceanic waves, waves described by the Euler equations 'break' and so-called shock waves are formed; this is a nonlinear effect and represents the solution becoming multi-valued. Physically this represents a breakdown of the assumptions that led to the formulation of the differential equations, and to extract further information from the equations we must go back to the more fundamental integral form. Then, weak solutions are formulated by allowing 'jumps' (discontinuities) in the flow quantities – density, velocity, pressure, entropy – using the Rankine–Hugoniot equations. Physical quantities are rarely discontinuous; in real flows, these discontinuities are smoothed out by viscosity and by heat transfer. (See Navier–Stokes equations.)
Shock propagation is studied – among many other fields – in aerodynamics and rocket propulsion, where sufficiently fast flows occur.
To properly compute the continuum quantities in discontinuous zones (for example shock waves or boundary layers) from the local forms (all the above forms are local forms, since the variables being described are typical of one point in the space considered, i.e. they are local variables) of the Euler equations through finite difference methods, generally too many space points and time steps would be necessary for the memory of computers now and in the near future. In these cases it is mandatory to avoid the local forms of the conservation equations, passing to weak forms, such as the finite volume form.
Rankine–Hugoniot equations
Starting from the simplest case, consider a steady free conservation equation in conservation form in the space domain:
where in general F is the flux matrix. By integrating this local equation over a fixed volume Vm, it becomes:
Then, basing on the divergence theorem, we can transform this integral into a boundary integral of the flux:
This global form simply states that there is no net flux of a conserved quantity passing through a region in the case steady and without source. In 1D the volume reduces to an interval, its boundary being its extrema, then the divergence theorem reduces to the fundamental theorem of calculus:
that is the simple finite difference equation, known as the jump relation:
That can be made explicit as:
where the notation employed is:
Or, if one performs an indefinite integral:
On the other hand, a transient conservation equation:
brings to a jump relation:
For one-dimensional Euler equations the conservation variables and the flux are the vectors:
where:
is the specific volume,
is the mass flux.
In the one-dimensional case the corresponding jump relations, called the Rankine–Hugoniot equations, are:
In the steady one-dimensional case they become simply:
Thanks to the mass difference equation, the energy difference equation can be simplified without any restriction:
where is the specific total enthalpy.
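(A sketch of the steady one-dimensional jump conditions being described, with subscripts 0 and 1 labelling the two sides of the discontinuity and h^t = h + u²/2 the specific total enthalpy:
\[
\rho_0 u_0 = \rho_1 u_1, \qquad
p_0 + \rho_0 u_0^2 = p_1 + \rho_1 u_1^2, \qquad
h_0 + \tfrac{1}{2}u_0^2 = h_1 + \tfrac{1}{2}u_1^2 .
\])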
These are usually expressed in the convective variables:
where:
is the flow speed
is the specific internal energy.
The energy equation is an integral form of the Bernoulli equation in the compressible case.
The former mass and momentum equations by substitution lead to the Rayleigh equation:
Since the second term is a constant, the Rayleigh equation always describes a simple line in the pressure–volume plane that does not depend on any equation of state, i.e. the Rayleigh line. By substitution in the Rankine–Hugoniot equations, that can also be made explicit as:
One can also obtain the kinetic equation and the Hugoniot equation. The analytical passages are not shown here for brevity.
These are respectively:
The Hugoniot equation, coupled with the fundamental equation of state of the material:
describes in general in the pressure–volume plane a curve passing through the conditions (v0, p0), i.e. the Hugoniot curve, whose shape strongly depends on the type of material considered.
It is also customary to define a Hugoniot function:
allowing one to quantify deviations from the Hugoniot equation, similarly to the previous definition of the hydraulic head, useful for the deviations from the Bernoulli equation.
Finite volume form
On the other hand, by integrating a generic conservation equation:
on a fixed volume Vm, and then basing on the divergence theorem, it becomes:
By integrating this equation also over a time interval:
Now by defining the node conserved quantity:
we deduce the finite volume form:
In particular, for Euler equations, once the conserved quantities have been determined, the convective variables are deduced by back substitution:
Then the explicit finite volume expressions of the original convective variables are:
Constraints
It has been shown that the Euler equations are not a complete set of equations; they require some additional constraints to admit a unique solution: these are the equations of state of the material considered. To be consistent with thermodynamics these equations of state should satisfy the two laws of thermodynamics. On the other hand, by definition non-equilibrium systems are described by laws lying outside these laws. In the following we list some very simple equations of state and the corresponding influence on the Euler equations.
Ideal polytropic gas
For an ideal polytropic gas the fundamental equation of state is:
where is the specific energy, is the specific volume, is the specific entropy, is the molecular mass, here is considered a constant (polytropic process), and can be shown to correspond to the heat capacity ratio. This equation can be shown to be consistent with the usual equations of state employed by thermodynamics.
From this equation one can derive the equation for pressure by its thermodynamic definition:
By inverting it one arrives at the mechanical equation of state:
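(A sketch of these two steps for the ideal polytropic gas, with γ the heat capacity ratio:
\[
p = -\left(\frac{\partial e}{\partial v}\right)_{\!s} = (\gamma - 1)\,\frac{e}{v} = (\gamma - 1)\,\rho\,e,
\qquad
e = \frac{p\,v}{\gamma - 1} .
\])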
Then for an ideal gas the compressible Euler equations can be simply expressed in the mechanical or primitive variables specific volume, flow velocity and pressure, by taking the set of the equations for a thermodynamic system and modifying the energy equation into a pressure equation through this mechanical equation of state. At last, in convective form they result:
and in one-dimensional quasilinear form they result:
where the conservative vector variable is:
and the corresponding jacobian matrix is:
Steady flow in material coordinates
In the case of steady flow, it is convenient to choose the Frenet–Serret frame along a streamline as the coordinate system for describing the steady momentum Euler equation:
where , and denote the flow velocity, the pressure and the density, respectively.
Let be a Frenet–Serret orthonormal basis which consists of a tangential unit vector, a normal unit vector, and a binormal unit vector to the streamline, respectively. Since a streamline is a curve that is tangent to the velocity vector of the flow, the left-hand side of the above equation, the convective derivative of velocity, can be described as follows:
where
and is the radius of curvature of the streamline.
Therefore, the momentum part of the Euler equations for a steady flow is found to have a simple form:
For barotropic flow , Bernoulli's equation is derived from the first equation:
The second equation expresses that, in the case the streamline is curved, there should exist a pressure gradient normal to the streamline because the centripetal acceleration of the fluid parcel is only generated by the normal pressure gradient.
The third equation expresses that pressure is constant along the binormal axis.
Streamline curvature theorem
Let be the distance from the center of curvature of the streamline, then the second equation is written as follows:
where
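(A sketch of the relation meant, with r the distance from the center of curvature, u the flow speed and ρ the density:
\[
\frac{\partial p}{\partial r} = \frac{\rho\,u^2}{r} > 0 .
\])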
This equation states: In a steady flow of an inviscid fluid without external forces, the center of curvature of the streamline lies in the direction of decreasing radial pressure.
Although this relationship between the pressure field and flow curvature is very useful, it doesn't have a name in the English-language scientific literature. Japanese fluid-dynamicists call the relationship the "Streamline curvature theorem".
This "theorem" explains clearly why there are such low pressures in the centre of vortices, which consist of concentric circles of streamlines.
This also is a way to intuitively explain why airfoils generate lift forces.
Exact solutions
All potential flow solutions are also solutions of the Euler equations, and in particular the incompressible Euler equations when the potential is harmonic.
Solutions to the Euler equations with vorticity are:
parallel shear flows – where the flow is unidirectional, and the flow velocity only varies in the cross-flow directions, e.g. in a Cartesian coordinate system the flow is for instance in the -direction – with the only non-zero velocity component being only dependent on and and not on
Arnold–Beltrami–Childress flow – an exact solution of the incompressible Euler equations.
Two solutions of the three-dimensional Euler equations with cylindrical symmetry have been presented by Gibbon, Moore and Stuart in 2003. These two solutions have infinite energy; they blow up everywhere in space in finite time.
See also
Bernoulli's theorem
Kelvin's circulation theorem
Cauchy equations
Froude number
Madelung equations
Navier–Stokes equations
Burgers equation
Jeans equations
Perfect fluid
D'Alembert's paradox
References
Notes
Citations
Sources
Further reading
Eponymous equations of physics
Equations of fluid dynamics
Leonhard Euler
Functions of space and time | Euler equations (fluid dynamics) | [
"Physics",
"Chemistry"
] | 6,757 | [
"Equations of fluid dynamics",
"Equations of physics",
"Functions of space and time",
"Eponymous equations of physics",
"Spacetime",
"Fluid dynamics"
] |
396,286 | https://en.wikipedia.org/wiki/Ginzburg%E2%80%93Landau%20theory | In physics, Ginzburg–Landau theory, often called Landau–Ginzburg theory, named after Vitaly Ginzburg and Lev Landau, is a mathematical physical theory used to describe superconductivity. In its initial form, it was postulated as a phenomenological model which could describe type-I superconductors without examining their microscopic properties. One GL-type superconductor is the famous YBCO, and generally all cuprates.
Later, a version of Ginzburg–Landau theory was derived from the Bardeen–Cooper–Schrieffer microscopic theory by Lev Gor'kov, thus showing that it also appears in some limit of microscopic theory and giving microscopic interpretation of all its parameters. The theory can also be given a general geometric setting, placing it in the context of Riemannian geometry, where in many cases exact solutions can be given. This general setting then extends to quantum field theory and string theory, again owing to its solvability, and its close relation to other, similar systems.
Introduction
Based on Landau's previously established theory of second-order phase transitions, Ginzburg and Landau argued that the free energy density of a superconductor near the superconducting transition can be expressed in terms of a complex order parameter field , where the quantity is a measure of the local density of superconducting electrons analogous to a quantum mechanical wave function. While is nonzero below a phase transition into a superconducting state, no direct interpretation of this parameter was given in the original paper. Assuming smallness of and smallness of its gradients, the free energy density has the form of a field theory and exhibits U(1) gauge symmetry:
where
is the free energy density of the normal phase,
and are phenomenological parameters that are functions of T (and often written just and ).
is an effective mass,
is an effective charge (usually 2e, where e is the charge of an electron),
is the magnetic vector potential, and
is the magnetic field.
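(As a sketch, the standard SI form of the free energy density being described, with α and β the phenomenological parameters and q and m* the effective charge and mass, is
\[
f = f_n + \alpha|\psi|^2 + \frac{\beta}{2}|\psi|^4
  + \frac{1}{2m^*}\left|\left(-i\hbar\nabla - q\mathbf{A}\right)\psi\right|^2
  + \frac{|\mathbf{B}|^2}{2\mu_0} .
\])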
The total free energy is given by . By minimizing with respect to variations in the order parameter and the vector potential , one arrives at the Ginzburg–Landau equations
where denotes the dissipation-free electric current density and Re the real part. The first equation — which bears some similarities to the time-independent Schrödinger equation, but is principally different due to a nonlinear term — determines the order parameter, . The second equation then provides the superconducting current.
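(As a sketch, the standard form of the two Ginzburg–Landau equations in this notation is
\[
\alpha\psi + \beta|\psi|^2\psi + \frac{1}{2m^*}\left(-i\hbar\nabla - q\mathbf{A}\right)^2\psi = 0,
\qquad
\mathbf{j} = \frac{q}{m^*}\,\operatorname{Re}\!\left[\psi^*\left(-i\hbar\nabla - q\mathbf{A}\right)\psi\right] .
\])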
Simple interpretation
Consider a homogeneous superconductor where there is no superconducting current and the equation for ψ simplifies to:
This equation has a trivial solution: . This corresponds to the normal conducting state, that is for temperatures above the superconducting transition temperature, .
Below the superconducting transition temperature, the above equation is expected to have a non-trivial solution (that is ). Under this assumption the equation above can be rearranged into:
When the right hand side of this equation is positive, there is a nonzero solution for (remember that the magnitude of a complex number can be positive or zero). This can be achieved by assuming the following temperature dependence of
with :
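(Presumably the standard linearization near the transition is meant:
\[
\alpha(T) = \alpha_0\,(T - T_c), \qquad \alpha_0 > 0,\ \beta > 0,
\]
so that α is positive above Tc and negative below, and the nonzero solution below Tc is |ψ|² = −α(T)/β.)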
Above the superconducting transition temperature, T > Tc, the expression α(T)/β is positive and the right hand side of the equation above is negative. The magnitude of a complex number must be a non-negative number, so only ψ = 0 solves the Ginzburg–Landau equation.
Below the superconducting transition temperature, T < Tc, the right hand side of the equation above is positive and there is a non-trivial solution for ψ. Furthermore, this solution approaches zero as T gets closer to Tc from below. Such behavior is typical of a second-order phase transition.
In Ginzburg–Landau theory the electrons that contribute to superconductivity were proposed to form a superfluid. In this interpretation, |ψ|2 indicates the fraction of electrons that have condensed into a superfluid.
Coherence length and penetration depth
The Ginzburg–Landau equations predicted two new characteristic lengths in a superconductor. The first characteristic length was termed coherence length, ξ. For T > Tc (normal phase), it is given by
while for T < Tc (superconducting phase), where it is more relevant, it is given by
It sets the exponential law according to which small perturbations of density of superconducting electrons recover their equilibrium value ψ0. Thus this theory characterized all superconductors by two length scales. The second one is the penetration depth, λ. It was previously introduced by the London brothers in their London theory. Expressed in terms of the parameters of Ginzburg–Landau model it is
where ψ0 is the equilibrium value of the order parameter in the absence of an electromagnetic field. The penetration depth sets the exponential law according to which an external magnetic field decays inside the superconductor.
The original idea on the parameter κ belongs to Landau. The ratio κ = λ/ξ is presently known as the Ginzburg–Landau parameter. It has been proposed by Landau that Type I superconductors are those with 0 < κ < 1/√2, and Type II superconductors those with κ > 1/√2.
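(For reference, the standard expressions for these two lengths are, as a sketch in SI units,
\[
\xi = \sqrt{\frac{\hbar^2}{2 m^* |\alpha|}}, \qquad
\lambda = \sqrt{\frac{m^*}{\mu_0\, q^2\, \psi_0^2}}, \qquad
\psi_0^2 = -\frac{\alpha}{\beta}, \qquad
\kappa = \frac{\lambda}{\xi} .
\])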
Fluctuations
The phase transition from the normal state is of second order for Type II superconductors, taking into account fluctuations, as demonstrated by Dasgupta and Halperin, while for Type I superconductors it is of first order, as demonstrated by Halperin, Lubensky and Ma.
Classification of superconductors
In the original paper Ginzburg and Landau observed the existence of two types of superconductors depending
on the energy of the interface between the normal and superconducting states. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957. He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices.
Geometric formulation
The Ginzburg–Landau functional can be formulated in the general setting of a complex vector bundle over a compact Riemannian manifold. This is the same functional as given above, transposed to the notation commonly used in Riemannian geometry. In multiple interesting cases, it can be shown to exhibit the same phenomena as the above, including Abrikosov vortices (see discussion below).
For a complex vector bundle over a Riemannian manifold with fiber , the order parameter is understood as a section of the vector bundle . The Ginzburg–Landau functional is then a Lagrangian for that section:
The notation used here is as follows. The fibers are assumed to be equipped with a Hermitian inner product so that the square of the norm is written as . The phenomenological parameters and have been absorbed so that the potential energy term is a quartic mexican hat potential; i.e., exhibiting spontaneous symmetry breaking, with a minimum at some real value . The integral is explicitly over the volume form
for an -dimensional manifold with determinant of the metric tensor .
The is the connection one-form and is the corresponding curvature 2-form (this is not the same as the free energy given up top; here, corresponds to the electromagnetic field strength tensor). The corresponds to the vector potential, but is in general non-Abelian when , and is normalized differently. In physics, one conventionally writes the connection as for the electric charge and vector potential ; in Riemannian geometry, it is more convenient to drop the (and all other physical units) and take to be a one-form taking values in the Lie algebra corresponding to the symmetry group of the fiber. Here, the symmetry group is SU(n), as that leaves the inner product invariant; so here, is a form taking values in the algebra .
The curvature generalizes the electromagnetic field strength to the non-Abelian setting, as the curvature form of an affine connection on a vector bundle . It is conventionally written as
That is, each is an skew-symmetric matrix. (See the article on the metric connection for additional articulation of this specific notation.) To emphasize this, note that the first term of the Ginzburg–Landau functional, involving the field-strength only, is
which is just the Yang–Mills action on a compact Riemannian manifold.
The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations
and
where is the adjoint of , analogous to the codifferential . Note that these are closely related to the Yang–Mills–Higgs equations.
Specific results
In string theory, it is conventional to study the Ginzburg–Landau functional for the manifold being a Riemann surface, and taking ; i.e., a line bundle. The phenomenon of Abrikosov vortices persists in these general cases, including , where one can specify any finite set of points where vanishes, including multiplicity. The proof generalizes to arbitrary Riemann surfaces and to Kähler manifolds. In the limit of weak coupling, it can be shown that converges uniformly to 1, while and converge uniformly to zero, and the curvature becomes a sum over delta-function distributions at the vortices. The sum over vortices, with multiplicity, just equals the degree of the line bundle; as a result, one may write a line bundle on a Riemann surface as a flat bundle, with N singular points and a covariantly constant section.
When the manifold is four-dimensional, possessing a spinc structure, then one may write a very similar functional, the Seiberg–Witten functional, which may be analyzed in a similar fashion, and which possesses many similar properties, including self-duality. When such systems are integrable, they are studied as Hitchin systems.
Self-duality
When the manifold is a Riemann surface , the functional can be re-written so as to explicitly show self-duality. One achieves this by writing the exterior derivative as a sum of Dolbeault operators . Likewise, the space of one-forms over a Riemann surface decomposes into a space that is holomorphic, and one that is anti-holomorphic: , so that forms in are holomorphic in and have no dependence on ; and vice-versa for . This allows the vector potential to be written as and likewise with and .
For the case of , where the fiber is so that the bundle is a line bundle, the field strength can similarly be written as
Note that in the sign-convention being used here, both and are purely imaginary (viz U(1) is generated by so derivatives are purely imaginary). The functional then becomes
The integral is understood to be over the volume form
,
so that
is the total area of the surface . The is the Hodge star, as before. The degree of the line bundle over the surface is
where is the first Chern class.
The Lagrangian is minimized (stationary) when solve the Ginzburg–Landau equations
Note that these are both first-order differential equations, manifestly self-dual. Integrating the second of these, one quickly finds that a non-trivial solution must obey
.
Roughly speaking, this can be interpreted as an upper limit to the density of the Abrikosov vortices. One can also show that the solutions are bounded; one must have .
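With the minimum normalized to σ = 1, the first-order system and the resulting constraints are usually quoted in a form such as

```latex
\bar{\partial}_{A}\psi \;=\; 0,
\qquad
{*F} \;=\; \tfrac{1}{2}\big(1 - \lvert\psi\rvert^{2}\big),
\qquad
N \;=\; \deg(L) \;\le\; \frac{\operatorname{Area}(\Sigma)}{4\pi},
\qquad
\lvert\psi\rvert \;\le\; 1 ,
```

where the numerical factors depend on how the functional is normalized.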
In string theory
In particle physics, any quantum field theory with a unique classical vacuum state and a potential energy with a degenerate critical point is called a Landau–Ginzburg theory. The generalization to N = (2,2) supersymmetric theories in 2 spacetime dimensions was proposed by Cumrun Vafa and Nicholas Warner in November 1988; in this generalization one imposes that the superpotential possess a degenerate critical point. The same month, together with Brian Greene, they argued that these theories are related by a renormalization group flow to sigma models on Calabi–Yau manifolds. In his 1993 paper "Phases of N = 2 theories in two dimensions", Edward Witten argued that Landau–Ginzburg theories and sigma models on Calabi–Yau manifolds are different phases of the same theory. A construction of such a duality was given by relating the Gromov–Witten theory of Calabi–Yau orbifolds to FJRW theory, an analogous Landau–Ginzburg theory. Witten's sigma models were later used to describe the low energy dynamics of 4-dimensional gauge theories with monopoles as well as brane constructions.
See also
Flux pinning
Gross–Pitaevskii equation
Landau theory
Stuart–Landau equation
Reaction–diffusion systems
Quantum vortex
Higgs bundle
Bogomol'nyi–Prasad–Sommerfield bound
References
Papers
V.L. Ginzburg and L.D. Landau, Zh. Eksp. Teor. Fiz. 20, 1064 (1950). English translation in: L. D. Landau, Collected papers (Oxford: Pergamon Press, 1965) p. 546
A.A. Abrikosov, Zh. Eksp. Teor. Fiz. 32, 1442 (1957) (English translation: Sov. Phys. JETP 5, 1174 (1957).) Abrikosov's original paper on vortex structure of Type-II superconductors derived as a solution of G–L equations for κ > 1/√2
L.P. Gor'kov, Sov. Phys. JETP 36, 1364 (1959)
A.A. Abrikosov's 2003 Nobel lecture: pdf file or video
V.L. Ginzburg's 2003 Nobel Lecture: pdf file or video
Superconductivity
Quantum field theory
Lev Landau | Ginzburg–Landau theory | ["Physics", "Materials_science", "Engineering"] | 3,176 | ["Quantum field theory", "Physical quantities", "Superconductivity", "Quantum mechanics", "Materials science", "Condensed matter physics", "Electrical resistance and conductance"] |
396,320 | https://en.wikipedia.org/wiki/Matrix%20mechanics | Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925. It was the first conceptually autonomous and logically consistent formulation of quantum mechanics. Its account of quantum jumps supplanted the Bohr model's electron orbits. It did so by interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, as manifest in Dirac's bra–ket notation.
In some contrast to the wave formulation, it produces spectra of (mostly energy) operators by purely algebraic, ladder operator methods. Relying on these methods, Wolfgang Pauli derived the hydrogen atom spectrum in 1926, before the development of wave mechanics.
Development of matrix mechanics
In 1925, Werner Heisenberg, Max Born, and Pascual Jordan formulated the matrix mechanics representation of quantum mechanics.
Epiphany at Helgoland
In 1925 Werner Heisenberg was working in Göttingen on the problem of calculating the spectral lines of hydrogen. By May 1925 he began trying to describe atomic systems by observables only. On June 7, after weeks of failing to alleviate his hay fever with aspirin and cocaine, Heisenberg left for the pollen-free North Sea island of Helgoland. While there, in between climbing and memorizing poems from Goethe's West-östlicher Diwan, he continued to ponder the spectral issue and eventually realised that adopting non-commuting observables might solve the problem. He later wrote:
It was about three o'clock at night when the final result of the calculation lay before me. At first I was deeply shaken. I was so excited that I could not think of sleep. So I left the house and awaited the sunrise on the top of a rock.
The three fundamental papers
After Heisenberg returned to Göttingen, he showed Wolfgang Pauli his calculations, commenting at one point:
Everything is still vague and unclear to me, but it seems as if the electrons will no more move on orbits.
On July 9 Heisenberg gave the same paper of his calculations to Max Born, saying that "he had written a crazy paper and did not dare to send it in for publication, and that Born should read it and advise him" prior to publication. Heisenberg then departed for a while, leaving Born to analyse the paper.
In the paper, Heisenberg formulated quantum theory without sharp electron orbits. Hendrik Kramers had earlier calculated the relative intensities of spectral lines in the Sommerfeld model by interpreting the Fourier coefficients of the orbits as intensities. But his answer, like all other calculations in the old quantum theory, was only correct for large orbits.
Heisenberg, after a collaboration with Kramers, began to understand that the transition probabilities were not quite classical quantities, because the only frequencies that appear in the Fourier series should be the ones that are observed in quantum jumps, not the fictional ones that come from Fourier-analyzing sharp classical orbits. He replaced the classical Fourier series with a matrix of coefficients, a fuzzed-out quantum analog of the Fourier series. Classically, the Fourier coefficients give the intensity of the emitted radiation, so in quantum mechanics the magnitude of the matrix elements of the position operator were the intensity of radiation in the bright-line spectrum. The quantities in Heisenberg's formulation were the classical position and momentum, but now they were no longer sharply defined. Each quantity was represented by a collection of Fourier coefficients with two indices, corresponding to the initial and final states.
When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University. Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper.
A follow-on paper was submitted for publication before the end of the year by all three authors. (A brief review of Born's role in the development of the matrix mechanics formulation of quantum mechanics along with a discussion of the key formula involving the non-commutativity of the probability amplitudes can be found in an article by Jeremy Bernstein. A detailed historical and technical account can be found in Mehra and Rechenberg's book The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926.)
Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912 and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as it did in the matrix formulation of quantum mechanics.
Born, however, had learned matrix algebra from Rosanes, as already noted, but Born had also learned Hilbert's theory of integral equations and quadratic forms for an infinite number of variables as was apparent from a citation by Born of Hilbert's work Grundzüge einer allgemeinen Theorie der Linearen Integralgleichungen published in 1912.
Jordan, too, was well equipped for the task. For a number of years, he had been an assistant to Richard Courant at Göttingen in the preparation of Courant and David Hilbert's book Methoden der mathematischen Physik I, which was published in 1924. This book, fortuitously, contained a great many of the mathematical tools necessary for the continued development of quantum mechanics.
In 1926, John von Neumann became assistant to David Hilbert, and he would coin the term Hilbert space to describe the algebra and analysis which were used in the development of quantum mechanics.
A linchpin contribution to this formulation was achieved in Dirac's reinterpretation/synthesis paper of 1925, which invented the language and framework usually employed today, in full display of the noncommutative structure of the entire construction.
Heisenberg's reasoning
Before matrix mechanics, the old quantum theory described the motion of a particle by a classical orbit, with well defined position and momentum , , with the restriction that the time integral over one period of the momentum times the velocity must be a positive integer multiple of the Planck constant
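Written out for one degree of freedom, this Bohr–Sommerfeld condition takes the standard form

```latex
\oint p\,\dot{q}\,\mathrm{d}t \;=\; \oint p\,\mathrm{d}q \;=\; n h, \qquad n = 1, 2, 3, \ldots
```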
While this restriction correctly selects orbits with more or less the
right energy values , the old quantum mechanical formalism did not describe time dependent processes, such as the emission or absorption of radiation.
When a classical particle is weakly coupled to a radiation field, so that the radiative damping can be neglected, it will emit radiation in a pattern that repeats itself every orbital period. The frequencies that make up the outgoing wave are then integer multiples of the orbital frequency, and this is a reflection of the fact that is periodic, so that its Fourier representation has frequencies only.
The coefficients are complex numbers. The ones with negative frequencies must be the complex conjugates of the ones with positive frequencies, so that will always be real,
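For a single coordinate with orbital frequency ω, the expansion and the reality condition just described take the standard form

```latex
x(t) \;=\; \sum_{n=-\infty}^{\infty} X_{n}\, e^{\,i n \omega t},
\qquad
X_{-n} \;=\; \overline{X_{n}} .
```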
A quantum mechanical particle, on the other hand, cannot emit radiation continuously; it can only emit photons. Assuming that the quantum particle started in orbit number , emitted a photon, then ended up in orbit number , the energy of the photon is , which means that its frequency is .
For large and , but with relatively small, these are the classical frequencies by Bohr's correspondence principle
In the formula above, is the classical period of either orbit or orbit , since the difference between them is higher order in . But for small and , or if is large, the frequencies are not integer multiples of any single frequency.
Since the frequencies that the particle emits are the same as the frequencies in the Fourier description of its motion, this suggests that something in the time-dependent description of the particle is oscillating with frequency . Heisenberg called this quantity ,
and demanded that it should reduce to the classical Fourier coefficients in the classical limit. For large values of and but with relatively small,
is the th Fourier coefficient of the classical motion at orbit . Since Xnm has opposite frequency to , the condition that is real becomes
By definition, only has the frequency , so its time evolution is simple:
This is the original form of Heisenberg's equation of motion.
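In the notation that later became standard, the two-index quantity and its simple time evolution are usually written as

```latex
X_{nm}(t) \;=\; X_{nm}(0)\, e^{\,i\omega_{nm} t},
\qquad
\omega_{nm} \;=\; \frac{E_{n} - E_{m}}{\hbar} ,
```

where the sign convention for the exponent varies between presentations.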
Given two arrays and describing two physical quantities, Heisenberg could form a new array of the same type by combining the terms , which also oscillate with the right frequency. Since the Fourier coefficients of the product of two quantities is the convolution of the Fourier coefficients of each one separately, the correspondence with Fourier series allowed Heisenberg to deduce the rule by which the arrays should be multiplied,
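The multiplication rule referred to above is the familiar one,

```latex
(XY)_{nm} \;=\; \sum_{k} X_{nk}\, Y_{km},
\qquad
\omega_{nk} + \omega_{km} \;=\; \omega_{nm} ,
```

so each element of the product oscillates with the correct transition frequency.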
Born pointed out that this is the law of matrix multiplication, so that the position, the momentum, the energy, all the observable quantities in the theory, are interpreted as matrices. Under this multiplication rule, the product depends on the order: is different from .
The matrix is a complete description of the motion of a quantum mechanical particle. Because the frequencies in the quantum motion are not multiples of a common frequency, the matrix elements cannot be interpreted as the Fourier coefficients of a sharp classical trajectory. Nevertheless, as matrices, and satisfy the classical equations of motion; also see Ehrenfest's theorem, below.
Matrix basics
When it was introduced by Werner Heisenberg, Max Born and Pascual Jordan in 1925, matrix mechanics was not immediately accepted and was a source of controversy, at first. Schrödinger's later introduction of wave mechanics was greatly favored.
Part of the reason was that Heisenberg's formulation was in an odd mathematical language, for the time, while Schrödinger's formulation was based on familiar wave equations. But there was also a deeper sociological reason. Quantum mechanics had been developing by two paths, one led by Einstein, who emphasized the wave–particle duality he proposed for photons, and the other led by Bohr, that emphasized the discrete energy states and quantum jumps that Bohr discovered. De Broglie had reproduced the discrete energy states within Einstein's framework – the quantum condition is the standing wave condition, and this gave hope to those in the Einstein school that all the discrete aspects of quantum mechanics would be subsumed into a continuous wave mechanics.
Matrix mechanics, on the other hand, came from the Bohr school, which was concerned with discrete energy states and quantum jumps. Bohr's followers did not appreciate physical models that pictured electrons as waves, or as anything at all. They preferred to focus on the quantities that were directly connected to experiments.
In atomic physics, spectroscopy gave observational data on atomic transitions arising from the interactions of atoms with light quanta. The Bohr school required that only those quantities that were in principle measurable by spectroscopy should appear in the theory. These quantities include the energy levels and their intensities but they do not include the exact location of a particle in its Bohr orbit. It is very hard to imagine an experiment that could determine whether an electron in the ground state of a hydrogen atom is to the right or to the left of the nucleus. It was a deep conviction that such questions did not have an answer.
The matrix formulation was built on the premise that all physical observables are represented by matrices, whose elements are indexed by two different energy levels. The set of eigenvalues of the matrix were eventually understood to be the set of all possible values that the observable can have. Since Heisenberg's matrices are Hermitian, the eigenvalues are real.
If an observable is measured and the result is a certain eigenvalue, the corresponding eigenvector is the state of the system immediately after the measurement. The act of measurement in matrix mechanics collapses the state of the system. If one measures two observables simultaneously, the state of the system collapses to a common eigenvector of the two observables. Since most matrices don't have any eigenvectors in common, most observables can never be measured precisely at the same time. This is the uncertainty principle.
If two matrices share their eigenvectors, they can be simultaneously diagonalized. In the basis where they are both diagonal, it is clear that their product does not depend on their order because multiplication of diagonal matrices is just multiplication of numbers. The uncertainty principle, by contrast, is an expression of the fact that two matrices and often do not commute, i.e., that does not necessarily equal 0. The fundamental commutation relation of matrix mechanics,
implies then that there are no states that simultaneously have a definite position and momentum.
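The fundamental relation in question is conventionally written, with the identity matrix on the right-hand side, as

```latex
X P \;-\; P X \;=\; i\hbar\, \mathbb{1} .
```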
This principle of uncertainty holds for many other pairs of observables as well. For example, the energy does not commute with the position either, so it is impossible to precisely determine the position and energy of an electron in an atom.
Nobel Prize
In 1928, Albert Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics. The announcement of the Nobel Prize in Physics for 1932 was delayed until November 1933. It was at that time that it was announced Heisenberg had won the Prize for 1932 "for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen" and Erwin Schrödinger and Paul Adrien Maurice Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory".
It might well be asked why Born was not awarded the Prize in 1932, along with Heisenberg, and Bernstein proffers speculations on this matter. One of them relates to Jordan joining the Nazi Party on May 1, 1933, and becoming a stormtrooper. Jordan's Party affiliations and Jordan's links to Born may well have affected Born's chance at the Prize at that time. Bernstein further notes that when Born finally won the Prize in 1954, Jordan was still alive, while the Prize was awarded for the statistical interpretation of quantum mechanics, attributable to Born alone.
Heisenberg's reactions to Born for Heisenberg receiving the Prize for 1932 and for Born receiving the Prize in 1954 are also instructive in evaluating whether Born should have shared the Prize with Heisenberg. On November 25, 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration – you, Jordan and I". Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside".
In 1954, Heisenberg wrote an article honoring Max Planck for his insight in 1900. In the article, Heisenberg credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye".
Mathematical development
Once Heisenberg introduced the matrices for and , he could find their matrix elements in special cases by guesswork, guided by the correspondence principle. Since the matrix elements are the quantum mechanical analogs of Fourier coefficients of the classical orbits, the simplest case is the harmonic oscillator, where the classical position and momentum, and , are sinusoidal.
Harmonic oscillator
In units where the mass and frequency of the oscillator are equal to one (see nondimensionalization), the energy of the oscillator is
The level sets of are the clockwise orbits, and they are nested circles in phase space. The classical orbit with energy is
The old quantum condition dictates that the integral of over an orbit, which is the area of the circle in phase space, must be an integer multiple of the Planck constant. The area of the circle of radius is . So
or, in natural units where , the energy is an integer.
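Collecting the steps of this argument in units where mass and frequency are one (a compact restatement of the standard calculation):

```latex
E \;=\; \frac{p^{2} + x^{2}}{2},
\qquad
\oint p\,\mathrm{d}x \;=\; \pi\,\big(\sqrt{2E}\big)^{2} \;=\; 2\pi E \;=\; n h
\;\;\Longrightarrow\;\;
E \;=\; n\hbar ,
```

so that with ħ set to one the energy is the integer n.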
The Fourier components of and are simple, and more so if they are combined into the quantities
Both and have only a single frequency, and and can be recovered from their sum and difference.
Since has a classical Fourier series with only the lowest frequency, and the matrix element is the th Fourier coefficient of the classical orbit, the matrix for is nonzero only on the line just above the diagonal, where it is equal to . The matrix for is likewise only nonzero on the line below the diagonal, with the same elements. Thus, from and , reconstruction yields
and
which, up to the choice of units, are the Heisenberg matrices for the harmonic oscillator. Both matrices are Hermitian, since they are constructed from the Fourier coefficients of real quantities.
Finding and is direct: since they are quantum Fourier coefficients, they evolve simply with time,
The matrix product of and is not Hermitian, but has a real part and an imaginary part. The real part is one half the symmetric expression , while the imaginary part is proportional to the commutator
It is simple to verify explicitly that in the case of the harmonic oscillator, is , multiplied by the identity.
It is likewise simple to verify that the matrix
is a diagonal matrix, with eigenvalues .
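These statements can be checked numerically with a finite truncation of the matrices. The following sketch (Python with NumPy, in the assumed units ħ = m = ω = 1, with an arbitrary truncation dimension chosen for illustration) builds the standard tridiagonal position and momentum matrices and verifies the commutator and the diagonal Hamiltonian away from the truncation corner.

```python
import numpy as np

N = 8  # truncation dimension (illustrative choice); results are exact away from the corner

# Lowering operator a with matrix elements a[n, n+1] = sqrt(n+1)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Heisenberg matrices for the oscillator in units hbar = m = omega = 1
X = (a + a.T) / np.sqrt(2)
P = -1j * (a - a.T) / np.sqrt(2)

# Canonical commutator [X, P] = i * identity, exact except in the truncated corner
comm = X @ P - P @ X
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))   # True

# Hamiltonian H = (P^2 + X^2)/2 is diagonal with eigenvalues n + 1/2
H = (P @ P + X @ X) / 2
print(np.allclose(H, np.diag(np.diag(H))))                # True: H is diagonal
print(np.real(np.diag(H))[:-1])                           # [0.5 1.5 2.5 ...]
```

The last row and column are distorted by the truncation, which is why the corner entries are excluded from the checks.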
Conservation of energy
The harmonic oscillator is a special case: its matrices are easy to find exactly, but it is hard to determine the general conditions from these special forms. For this reason, Heisenberg investigated the anharmonic oscillator, with Hamiltonian
In this case, the and matrices are no longer simple off-diagonal matrices, since the corresponding classical orbits are slightly squashed and displaced, so that they have Fourier coefficients at every classical frequency. To determine the matrix elements, Heisenberg required that the classical equations of motion be obeyed as matrix equations,
He noticed that if this could be done, then , considered as a matrix function of and , will have zero time derivative.
where is the anticommutator,
Since all the off-diagonal elements have a nonzero frequency, being constant implies that is diagonal.
It was clear to Heisenberg that, within this formalism, the energy could be exactly conserved in an arbitrary quantum system, a very encouraging sign.
The process of emission and absorption of photons seemed to demand that the conservation of energy would hold at best on average. If a wave containing exactly one photon passes over some atoms, and one of them absorbs it, that atom needs to tell the others that they can't absorb the photon anymore. But if the atoms are far apart, no signal can reach the other atoms in time, and they might end up absorbing the same photon anyway and dissipating the energy to the environment. When the signal reached them, the other atoms would have to somehow recall that energy. This paradox led Bohr, Kramers and Slater to abandon exact conservation of energy. Heisenberg's formalism, when extended to include the electromagnetic field, was obviously going to sidestep this problem, a hint that the interpretation of the theory would involve wavefunction collapse.
Differentiation trick — canonical commutation relations
Demanding that the classical equations of motion are preserved is not a strong enough condition to determine the matrix elements. The Planck constant does not appear in the classical equations, so that the matrices could be constructed for many different values of and still satisfy the equations of motion, but with different energy levels.
So, in order to implement his program, Heisenberg needed to use the old quantum condition to fix the energy levels, then fill in the matrices with Fourier coefficients of the classical equations, then alter the matrix coefficients and the energy levels slightly to make sure the classical equations are satisfied. This is clearly not satisfactory. The old quantum conditions refer to the area enclosed by the sharp classical orbits, which do not exist in the new formalism.
The most important thing that Heisenberg discovered is how to translate the old quantum condition into a simple statement in matrix mechanics.
To do this, he investigated the action integral as a matrix quantity,
There are several problems with this integral, all stemming from the incompatibility of the matrix formalism with the old picture of orbits. Which period should be used? Semiclassically, it should be either or , but the difference is order , and an answer to order is sought. The quantum condition tells us that is on the diagonal, so the fact that is classically constant tells us that the off-diagonal elements are zero.
His crucial insight was to differentiate the quantum condition with respect to . This idea only makes complete sense in the classical limit, where is not an integer but the continuous action variable , but Heisenberg performed analogous manipulations with matrices, where the intermediate expressions are sometimes discrete differences and sometimes derivatives.
In the following discussion, for the sake of clarity, the differentiation will be performed on the classical variables, and the transition to matrix mechanics will be done afterwards, guided by the correspondence principle.
In the classical setting, the derivative is the derivative with respect to of the integral which defines , so it is tautologically equal to 1.
where the derivatives and should be interpreted as differences with respect to at corresponding times on nearby orbits, exactly what would be obtained if the Fourier coefficients of the orbital motion were differentiated. (These derivatives are symplectically orthogonal in phase space to the time derivatives and ).
The final expression is clarified by introducing the variable canonically conjugate to , which is called the angle variable : The derivative with respect to time is a derivative with respect to , up to a factor of ,
So the quantum condition integral is the average value over one cycle of the Poisson bracket of and .
An analogous differentiation of the Fourier series of demonstrates that the off-diagonal elements of the Poisson bracket are all zero. The Poisson bracket of two canonically conjugate variables, such as and , is the constant value 1, so this integral really is the average value of 1; so it is 1, as we knew all along, because it is after all. But Heisenberg, Born and Jordan, unlike Dirac, were not familiar with the theory of Poisson brackets, so, for them, the differentiation effectively evaluated in coordinates.
The Poisson Bracket, unlike the action integral, does have a simple translation to matrix mechanics – it normally corresponds to the imaginary part of the product of two variables, the commutator.
To see this, examine the (antisymmetrized) product of two matrices and in the correspondence limit, where the matrix elements are slowly varying functions of the index, keeping in mind that the answer is zero classically.
In the correspondence limit, when indices , are large and nearby, while , are small, the rate of change of the matrix elements in the diagonal direction is the matrix element of the derivative of the corresponding classical quantity. So it is possible to shift any matrix element diagonally through the correspondence,
where the right hand side is really only the th Fourier component of at the orbit near to this semiclassical order, not a full well-defined matrix.
The semiclassical time derivative of a matrix element is obtained up to a factor of by multiplying by the distance from the diagonal,
since the coefficient is semiclassically the th Fourier coefficient of the th classical orbit.
The imaginary part of the product of A and B can be evaluated by shifting the matrix elements around so as to reproduce the classical answer, which is zero.
The leading nonzero residual is then given entirely by the shifting. Since all the matrix elements are at indices which have a small distance from the large index position , it helps to introduce two temporary notations:
for the matrices, and for the th Fourier components of classical quantities,
Flipping the summation variable in the first sum from to , the matrix element becomes,
and it is clear that the principal (classical) part cancels.
The leading quantum part, neglecting the higher order product of derivatives in the residual expression, is then equal to
so that, finally,
which can be identified with times the th classical Fourier component of the Poisson bracket.
Heisenberg's original differentiation trick was eventually extended to a full semiclassical derivation of the quantum condition, in collaboration with Born and Jordan.
Once they were able to establish that
this condition replaced and extended the old quantization rule, allowing the matrix elements of and for an arbitrary system to be determined simply from the form of the Hamiltonian.
The new quantization rule was assumed to be universally true, even though the derivation from the old quantum theory required semiclassical reasoning.
(A full quantum treatment, however, for more elaborate arguments of the brackets, was appreciated in the 1940s to amount to extending Poisson brackets to Moyal brackets.)
State vectors and the Heisenberg equation
To make the transition to standard quantum mechanics, the most important further addition was the quantum state vector, now written ,
which is the vector that the matrices act on. Without the state vector, it is not clear which particular motion the Heisenberg matrices are describing, since they include all the motions somewhere.
The interpretation of the state vector, whose components are written , was furnished by Born. This interpretation is statistical: the result of a measurement of the physical quantity corresponding to the matrix is random, with an average value equal to
Alternatively, and equivalently, the state vector gives the probability amplitude for the quantum system to be in the energy state .
Once the state vector was introduced, matrix mechanics could be rotated to any basis, where the matrix need no longer be diagonal. The Heisenberg equation of motion in its original form states that evolves in time like a Fourier component,
which can be recast in differential form
and it can be restated so that it is true in an arbitrary basis, by noting that the matrix is diagonal with diagonal values ,
This is now a matrix equation, so it holds in any basis. This is the modern form of the Heisenberg equation of motion.
Its formal solution is:
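In the usual convention, the basis-independent equation and its formal solution read

```latex
i\hbar\,\frac{\mathrm{d}A}{\mathrm{d}t} \;=\; A H - H A \;=\; [A, H],
\qquad
A(t) \;=\; e^{\,iHt/\hbar}\, A(0)\, e^{-\,iHt/\hbar} ,
```

a standard reconstruction consistent with the Fourier-component form described above.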
All these forms of the equation of motion above say the same thing, that is equivalent to , through a basis rotation by the unitary matrix , a systematic picture elucidated by Dirac in his bra–ket notation.
Conversely, by rotating the basis for the state vector at each time by , the time dependence in the matrices can be undone. The matrices are now time independent, but the state vector rotates,
This is the Schrödinger equation for the state vector, and this time-dependent change of basis amounts to transformation to the Schrödinger picture, with .
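Written out in the usual way, the rotating state vector obeys

```latex
i\hbar\,\frac{\mathrm{d}}{\mathrm{d}t}\,|\psi_t\rangle \;=\; H\,|\psi_t\rangle,
\qquad
|\psi_t\rangle \;=\; e^{-\,iHt/\hbar}\,|\psi_0\rangle .
```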
In quantum mechanics in the Heisenberg picture the state vector, does not change with time, while an observable satisfies the Heisenberg equation of motion,
The extra term is for operators such as
which have an explicit time dependence, in addition to the time dependence from the unitary evolution discussed.
The Heisenberg picture does not distinguish time from space, so it is better suited to relativistic theories than the Schrödinger equation. Moreover, the similarity to classical physics is more manifest: the Hamiltonian equations of motion for classical mechanics are recovered by replacing the commutator above by the Poisson bracket (see also below). By the Stone–von Neumann theorem, the Heisenberg picture and the Schrödinger picture must be unitarily equivalent, as detailed below.
Further results
Matrix mechanics rapidly developed into modern quantum mechanics, and gave interesting physical results on the spectra of atoms.
Wave mechanics
Jordan noted that the commutation relations ensure that acts as a differential operator.
The operator identity
allows the evaluation of the commutator of with any power of , and it implies that
which, together with linearity, implies that a P-commutator effectively differentiates any analytic matrix function of .
Assuming limits are defined sensibly, this extends to arbitrary functions, but the extension need not be made explicit until a certain degree of mathematical rigor is required,
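The standard identities behind this argument, stated for one degree of freedom with the canonical commutator assumed, are

```latex
[P,\, X^{n}] \;=\; -\,i\hbar\, n\, X^{\,n-1},
\qquad
[P,\, f(X)] \;=\; -\,i\hbar\, f'(X) ,
```

so that, acting on functions of the eigenvalue of X, the operator P behaves like a derivative up to an additive term that commutes with X, which is exactly the ambiguity discussed below.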
Since is a Hermitian matrix, it should be diagonalizable, and it will be clear from the eventual form of that every real number can be an eigenvalue. This makes some of the mathematics subtle, since there is a separate eigenvector for every point in space.
In the basis where is diagonal, an arbitrary state can be written as a superposition of states with eigenvalues ,
so that , and the operator multiplies each eigenvector by ,
Define a linear operator which differentiates ,
and note that
so that the operator obeys the same commutation relation as . Thus, the difference between and must commute with ,
so it may be simultaneously diagonalized with : its value acting on any eigenstate of is some function of the eigenvalue .
This function must be real, because both and are Hermitian,
rotating each state by a phase , that is, redefining the phase of the wavefunction:
The operator is redefined by an amount:
which means that, in the rotated basis, is equal to .
Hence, there is always a basis for the eigenvalues of where the action of on any wavefunction is known:
and the Hamiltonian in this basis is a linear differential operator on the state-vector components,
Thus, the equation of motion for the state vector is but a celebrated differential equation,
Since is a differential operator, in order for it to be sensibly defined, there must be eigenvalues of which neighbor every given value. This suggests that the only possibility is that the space of all eigenvalues of is all real numbers, and that is , up to a phase rotation.
To make this rigorous requires a sensible discussion of the limiting space of functions, and in this space this is the Stone–von Neumann theorem: any operators and which obey the commutation relations can be made to act on a space of wavefunctions, with a derivative operator. This implies that a Schrödinger picture is always available.
Matrix mechanics easily extends to many degrees of freedom in a natural way. Each degree of freedom has a separate operator and a separate effective differential operator , and the wavefunction is a function of all the possible eigenvalues of the independent commuting variables.
In particular, this means that a system of interacting particles in 3 dimensions is described by one vector whose components in a basis where all the are diagonal is a mathematical function of -dimensional space describing all their possible positions, effectively a much bigger collection of values than the mere collection of three-dimensional wavefunctions in one physical space. Schrödinger came to the same conclusion independently, and eventually proved the equivalence of his own formalism to Heisenberg's.
Since the wavefunction is a property of the whole system, not of any one part, the description in quantum mechanics is not entirely local. The description of several quantum particles has them correlated, or entangled. This entanglement leads to strange correlations between distant particles which violate the classical Bell's inequality.
Even if the particles can only be in just two positions, the wavefunction for particles requires complex numbers, one for each total configuration of positions. This is exponentially many numbers in , so simulating quantum mechanics on a computer requires exponential resources. Conversely, this suggests that it might be possible to find quantum systems of size which physically compute the answers to problems which classically require bits to solve. This is the aspiration behind quantum computing.
Ehrenfest theorem
For the time-independent operators and , so the Heisenberg equation above reduces to:
where the square brackets denote the commutator. For a Hamiltonian which is , the and operators satisfy:
where the first is classically the velocity, and the second is classically the force, or potential gradient. These reproduce Hamilton's form of Newton's laws of motion. In the Heisenberg picture, the and operators satisfy the classical equations of motion. Taking the expectation value of both sides of the equation shows that, in any state :
So Newton's laws are exactly obeyed by the expected values of the operators in any given state. This is Ehrenfest's theorem, which is an obvious corollary of the Heisenberg equations of motion, but is less trivial in the Schrödinger picture, where Ehrenfest discovered it.
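For a Hamiltonian of the usual kinetic-plus-potential form, these expectation-value statements are conventionally written as

```latex
m\,\frac{\mathrm{d}}{\mathrm{d}t}\langle X \rangle \;=\; \langle P \rangle,
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\langle P \rangle \;=\; -\,\Big\langle \frac{\partial V}{\partial X} \Big\rangle .
```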
Transformation theory
In classical mechanics, a canonical transformation of phase space coordinates is one which preserves the structure of the Poisson brackets. The new variables , have the same Poisson brackets with each other as the original variables , . Time evolution is a canonical transformation, since the phase space at any time is just as good a choice of variables as the phase space at any other time.
The Hamiltonian flow is the canonical transformation:
Since the Hamiltonian can be an arbitrary function of and , there are such infinitesimal canonical transformations corresponding to every classical quantity , where serves as the Hamiltonian to generate a flow of points in phase space for an increment of time ,
For a general function on phase space, its infinitesimal change at every step under this map is
The quantity is called the infinitesimal generator of the canonical transformation.
In quantum mechanics, the quantum analog is now a Hermitian matrix, and the equations of motion are given by commutators,
The infinitesimal canonical motions can be formally integrated, just as the Heisenberg equation of motion was integrated,
where and is an arbitrary parameter.
The definition of a quantum canonical transformation is thus an arbitrary unitary change of basis on the space of all state vectors. is an arbitrary unitary matrix, a complex rotation in phase space,
These transformations leave the sum of the absolute square of the wavefunction components invariant, while they take states which are multiples of each other (including states which are imaginary multiples of each other) to states which are the same multiple of each other.
The interpretation of the matrices is that they act as generators of motions on the space of states.
For example, the motion generated by can be found by solving the Heisenberg equation of motion using as a Hamiltonian,
These are translations of the matrix by a multiple of the identity matrix,
This is the interpretation of the derivative operator : , the exponential of a derivative operator is a translation (so Lagrange's shift operator).
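Concretely, with the canonical commutator assumed, the finite transformation generated by the momentum and its action on wavefunctions take the familiar form

```latex
e^{\,iaP/\hbar}\, X\, e^{-\,iaP/\hbar} \;=\; X + a,
\qquad
\big(e^{-\,iaP/\hbar}\psi\big)(x) \;=\; \psi(x - a) ,
```

up to the overall sign convention chosen for the generator.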
The operator likewise generates translations in . The Hamiltonian generates translations in time, the angular momentum generates rotations in physical space, and the operator generates rotations in phase space.
When a transformation, like a rotation in physical space, commutes with the Hamiltonian, the transformation is called a symmetry (behind a degeneracy) of the Hamiltonian – the Hamiltonian expressed in terms of rotated coordinates is the same as the original Hamiltonian. This means that the change in the Hamiltonian under the infinitesimal symmetry generator vanishes,
It then follows that the change in the generator under time translation also vanishes,
so that the matrix is constant in time: it is conserved.
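In commutator form, the chain of implications just described is simply

```latex
[G, H] \;=\; 0
\quad\Longrightarrow\quad
i\hbar\,\frac{\mathrm{d}G}{\mathrm{d}t} \;=\; [G, H] \;=\; 0 ,
```

where G stands for the infinitesimal generator (the symbol is chosen here only for illustration).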
The one-to-one association of infinitesimal symmetry generators and conservation laws was discovered by Emmy Noether for classical mechanics, where the commutators are Poisson brackets, but the quantum-mechanical reasoning is identical. In quantum mechanics, any unitary symmetry transformation yields a conservation law, since if the matrix U has the property that
so it follows that
and that the time derivative of is zero – it is conserved.
The eigenvalues of unitary matrices are pure phases, so that the value of a unitary conserved quantity is a complex number of unit magnitude, not a real number. Another way of saying this is that a unitary matrix is the exponential of times a Hermitian matrix, so that the additive conserved real quantity, the phase, is only well-defined up to an integer multiple of . Only when the unitary symmetry matrix is part of a family that comes arbitrarily close to the identity are the conserved real quantities single-valued, and then the demand that they are conserved becomes a much more exacting constraint.
Symmetries which can be continuously connected to the identity are called continuous, and translations, rotations, and boosts are examples. Symmetries which cannot be continuously connected to the identity are discrete, and the operation of space-inversion, or parity, and charge conjugation are examples.
The interpretation of the matrices as generators of canonical transformations is due to Paul Dirac. The correspondence between symmetries and matrices was shown by Eugene Wigner to be complete, if antiunitary matrices which describe symmetries which include time-reversal are included.
Selection rules
It was physically clear to Heisenberg that the absolute squares of the matrix elements of , which are the Fourier coefficients of the oscillation, would yield the rate of emission of electromagnetic radiation.
In the classical limit of large orbits, if a charge with position and charge is oscillating next to an equal and opposite charge at position 0, the instantaneous dipole moment is , and the time variation of this moment translates directly into the space-time variation of the vector potential, which yields nested outgoing spherical waves.
For atoms, the wavelength of the emitted light is about 10,000 times the atomic radius, and the dipole moment is the only contribution to the radiative field, while all other details of the atomic charge distribution can be ignored.
Ignoring back-reaction, the power radiated in each outgoing mode is a sum of separate contributions from the square of each independent time Fourier mode of ,
Now, in Heisenberg's representation, the Fourier coefficients of the dipole moment are the matrix elements of . This correspondence allowed Heisenberg to provide the rule for the transition intensities, the fraction of the time that, starting from an initial state , a photon is emitted and the atom jumps to a final state ,
This then allowed the magnitude of the matrix elements to be interpreted statistically: they give the intensity of the spectral lines, the probability for quantum jumps from the emission of dipole radiation.
Since the transition rates are given by the matrix elements of , wherever is zero, the corresponding transition should be absent. These were called the selection rules, which were a puzzle until the advent of matrix mechanics.
An arbitrary state of the hydrogen atom, ignoring spin, is labelled by , where the value of is a measure of the total orbital angular momentum and is its -component, which defines the orbit orientation. The components of the angular momentum pseudovector are
where the products in this expression are independent of order and real, because different components of and commute.
The commutation relations of with all three coordinate matrices , , (or with any vector) are easy to find,
which confirms that the operator generates rotations between the three components of the vector of coordinate matrices .
From this, the commutator of and the coordinate matrices , , can be read off,
This means that the quantities and have a simple commutation rule,
Just like the matrix elements of and for the harmonic oscillator Hamiltonian, this commutation law implies that these operators only have certain off diagonal matrix elements in states of definite ,
meaning that the matrix takes an eigenvector of with eigenvalue to an eigenvector with eigenvalue . Similarly, decrease by one unit, while does not change the value of .
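The commutation rules in question are the standard ones for the z-component of the angular momentum; in conventional notation,

```latex
[L_z,\; X \pm iY] \;=\; \pm\,\hbar\,(X \pm iY),
\qquad
[L_z,\; Z] \;=\; 0 ,
```

so that X + iY raises the eigenvalue of L_z by one unit of ħ, X − iY lowers it by one unit, and Z leaves it unchanged.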
So, in a basis of states where and have definite values, the matrix elements of any of the three components of the position are zero, except when is the same or changes by one unit.
This places a constraint on the change in total angular momentum. Any state can be rotated so that its angular momentum is in the -direction as much as possible, where . The matrix element of the position acting on can only produce values of which are bigger by one unit, so that if the coordinates are rotated so that the final state is , the value of can be at most one bigger than the biggest value of that occurs in the initial state. So is at most .
The matrix elements vanish for , and the reverse matrix element is determined by Hermiticity, so these vanish also when : Dipole transitions are forbidden with a change in angular momentum of more than one unit.
Sum rules
The Heisenberg equation of motion determines the matrix elements of in the Heisenberg basis from the matrix elements of .
which turns the diagonal part of the commutation relation into a sum rule for the magnitude of the matrix elements:
This yields a relation for the sum of the spectroscopic intensities to and from any given state, although to be absolutely correct, contributions from the radiative capture probability for unbound scattering states must be included in the sum:
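The resulting relation is the Thomas–Reiche–Kuhn sum rule; for a single particle of mass m in one dimension it takes the form

```latex
\sum_{k} \big(E_{k} - E_{n}\big)\,\lvert X_{nk}\rvert^{2} \;=\; \frac{\hbar^{2}}{2m} .
```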
See also
Interaction picture
Bra–ket notation
Introduction to quantum mechanics
Heisenberg's entryway to matrix mechanics
References
Further reading
Max Born The statistical interpretation of quantum mechanics. Nobel Lecture – December 11, 1954.
Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005) . Also published in Germany: Max Born - Baumeister der Quantenwelt. Eine Biographie (Spektrum Akademischer Verlag, 2005), .
Max Jammer The Conceptual Development of Quantum Mechanics (McGraw-Hill, 1966)
Jagdish Mehra and Helmut Rechenberg The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926. (Springer, 2001)
B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968)
Thomas F. Jordan, Quantum Mechanics in Simple Matrix Form, (Dover publications, 2005)
External links
An Overview of Matrix Mechanics
Matrix Methods in Quantum Mechanics
Heisenberg Quantum Mechanics (The theory's origins and its historical developing 1925–27)
Werner Heisenberg 1970 CBC radio Interview
On Matrix Mechanics at MathPages
Quantum mechanics | Matrix mechanics | ["Physics"] | 8,589 | ["Theoretical physics", "Quantum mechanics"] |
396,459 | https://en.wikipedia.org/wiki/William%20Henry%20Bragg | Sir William Henry Bragg (2 July 1862 – 12 March 1942) was an English physicist, chemist, mathematician, and active sportsman who uniquely shared a Nobel Prize with his son Lawrence Bragg – the 1915 Nobel Prize in Physics: "for their services in the analysis of crystal structure by means of X-rays". The mineral Braggite is named after him and his son. He was knighted in 1920.
Biography
Early years
Bragg was born at Westward, near Wigton, Cumberland, England, the son of Robert John Bragg, a merchant marine officer and farmer, and his wife Mary née Wood, a clergyman's daughter. When Bragg was seven years old, his mother died, and he was raised by his uncle, also named William Bragg, at Market Harborough, Leicestershire. He was educated at the Grammar School there, at King William's College on the Isle of Man and, having won an exhibition (scholarship), at Trinity College, Cambridge. He graduated in 1884 as third wrangler, and in 1885 was awarded a first class honours in the mathematical tripos.
University of Adelaide
In 1885, at the age of 23, Bragg was appointed (Sir Thomas) Elder Professor of Mathematics and Experimental Physics in the University of Adelaide, Australia, and started work there early in 1886. Being a skilled mathematician, at that time he had limited knowledge of physics, most of which was in the form of applied mathematics he had learnt at Trinity. Also at that time, there were only about a hundred students doing full courses at Adelaide, of whom less than a handful belonged to the science school, whose deficient teaching facilities Bragg improved by apprenticing himself to a firm of instrument makers. Bragg was an able and popular lecturer; he encouraged the formation of the student union, and the attendance, free of charge, of science teachers at his lectures.
Bragg's interest in physics developed, particularly in the field of electromagnetism. In 1895, he was visited by Ernest Rutherford, en route from New Zealand to Cambridge; this was the commencement of a lifelong friendship. Bragg had a keen interest in the new discovery of X-rays by Wilhelm Röntgen. On 29 May 1896 at Adelaide, Bragg demonstrated before a meeting of local doctors the application of "X-rays to reveal structures that were otherwise invisible". Samuel Barbour, senior chemist of F. H. Faulding & Co., an Adelaide pharmaceutical manufacturer, supplied the necessary apparatus in the form of a Crookes tube, a glass discharge tube. The tube had been obtained at Leeds, England, where Barbour visited the firm of Reynolds and Branson, a manufacturer of photographic and laboratory equipment. Barbour returned to Adelaide in April 1896. Barbour had conducted his own experiments shortly after return to Australia, but results were limited due to limited battery power. At the University, the tube was attached to an induction coil and a battery borrowed from Sir Charles Todd, Bragg's father-in-law. The induction coil was utilized to produce the electric spark necessary for Bragg and Barbour to "generate short bursts of X-rays". The audience was favorably impressed. Bragg availed himself as a test subject, in the manner of Röntgen and allowed an X-ray photograph to be taken of his hand. The image of the fingers in his hand revealed "an old injury to one of his fingers sustained when using the turnip chopping machine on his father's farm in Cumbria".
As early as 1895, Bragg was working on wireless telegraphy, though public lectures and demonstrations focussed on his X-ray research which would later lead to his Nobel Prize. In a hurried visit by Rutherford, he was reported as working on a Hertzian oscillator. There were many common practical threads to the two technologies and he was ably assisted in the laboratory by Arthur Lionel Rogers who manufactured much of the equipment. On 21 September 1897 Bragg gave the first recorded public demonstration of the working of wireless telegraphy in Australia during a lecture meeting at the University of Adelaide as part of the Public Teachers' Union conference. Bragg departed Adelaide in December 1897, and spent all of 1898 on a 12-month leave of absence, touring Great Britain and Europe and during this time visited Marconi and inspected his wireless facilities. He returned to Adelaide in early March 1899, and already on 13 May 1899, Bragg and his father-in-law, Sir Charles Todd, were conducting preliminary tests of wireless telegraphy with a transmitter at the Observatory and a receiver on the South Road (about 200 metres). Experiments continued throughout the southern winter of 1899 and the range was progressively extended to Henley Beach. In September the work was extended to two-way transmissions with the addition of a second induction coil loaned by Mr. Oddie of Ballarat. It was desired to extend the experiments across a sea path and Todd was interested in connecting Cape Spencer and Althorpe Island, but local costs were considered prohibitive while the charges for patented equipment from the Marconi Company were exorbitant. At the same time Bragg's interests were leaning towards X-rays and practical work in wireless in South Australia was largely dormant for the next decade.
The turning-point in Bragg's career came in 1904 when he gave the presidential address to section A of the Australasian Association for the Advancement of Science at Dunedin, New Zealand, on "Some Recent Advances in the Theory of the Ionization of Gases". This idea was followed up "in a brilliant series of researches" which, within three years, earned him a fellowship of the Royal Society of London. This paper was also the origin of his first book Studies in Radioactivity (1912). Soon after the delivery of his 1904 address, some radium bromide was made available to Bragg for experimentation. In December 1904 his paper "On the Absorption of α Rays and on the Classification of the α Rays from Radium" appeared in the Philosophical Magazine, and in the same issue a paper "On the Ionization Curves of Radium", written in collaboration with his student Richard Kleeman, also appeared.
At the end of 1908, Bragg returned to England. During his 23 years in Australia "he had seen the number of students at the University of Adelaide almost quadruple, and had a full share in the development of its excellent science school." He had returned to England on the maiden voyage of the SS Waratah, a ship which vanished at sea on its second voyage the next year. He had been alarmed at the ship's tendency to list during his voyage, and had concluded that the ship's metacentre was just below her centre of gravity. In 1911, he testified his belief that the Waratah was unstable at the Inquiry into the ship's disappearance.
There is a bust of William Bragg in North Terrace, Adelaide, South Australia.
University of Leeds
Bragg occupied the Cavendish chair of physics at the University of Leeds from 1909 until 1915. He continued his work on X-rays with much success. He invented the X-ray spectrometer and with his son, Lawrence Bragg, then a research student at Cambridge, founded the new science of X-ray crystallography, the analysis of crystal structure using X-ray diffraction.
World War I
Both of his sons (Lawrence and Robert) were called into the army after war broke out in 1914. The following year he was appointed Quain Professor of Physics at University College London. He had to wait for almost a year to contribute to the war effort: finally, in July 1915, he was appointed to the Board of Invention and Research set up by the Admiralty. In September, his younger son Robert died of wounds at Gallipoli. In November, he shared the Nobel Prize in Physics with elder son William Lawrence. The Navy was struggling to prevent sinkings by unseen, submerged U-boats. The scientists recommended that the best tactic was to listen for the submarines. The Navy had a hydrophone research establishment at Aberdour, Scotland, staffed with navy men. In November 1915, two young physicists were added to its staff. Bowing to outside pressure to use science, in July 1916, the Admiralty appointed Bragg as scientific director at Aberdour, assisted by three additional young physicists. They developed an improved directional hydrophone, which finally convinced the Admiralty of their usefulness. Late in 1916, Bragg with his small group moved to Harwich, where the staff was enlarged and they had access to a submarine for tests. In France, where scientists had been mobilized since the beginning of the war, the physicist Paul Langevin made a major stride with echolocation, generating intense sound pulses with quartz sheets oscillated at high frequency, which were then used as microphones to listen for echoes. Quartz was usable when vacuum tubes became available at the end of 1917 to amplify the faint signals. The British made sonar practicable by using mosaics of small quartz bits rather than slices from a large crystal. In January 1918, Bragg moved into the Admiralty as head of scientific research in the anti-submarine division. By war's end British vessels were being equipped with sonar manned by trained listeners.
Inspired by William Lawrence's methods for locating enemy guns by the sound of their firing, the group recorded the output from six microphones, miles apart along the coast, on moving photographic film. Sound ranging is much more accurate in the sea than in the turbulent atmosphere. They were able to localize the sites of distant explosions, which were used to obtain the precise positions of British warships and of minefields.
University College London
After the war Bragg returned to University College London, where he continued to work on crystal analysis.
Royal Institution
From 1923, he was Fullerian Professor of Chemistry at the Royal Institution and director of the Davy Faraday Research Laboratory. This institution was practically rebuilt in 1929–30 and, under Bragg's directorship, many valuable papers were issued from the laboratory. In 1919, 1923, 1925 and 1931 he was invited to deliver the Royal Institution Christmas Lectures, on The World of Sound, Concerning the Nature of Things, Old Trades and New Knowledge, and The Universe of Life respectively.
The Royal Society and the coming war
Bragg was elected president of the Royal Society in 1935. The physiologist A. V. Hill was biological secretary and soon A. C. G. Egerton became physical secretary. During World War I all three had stood by for frustrating months before their skills were employed for the war effort. Now the cause of science was strengthened by the report of a high-level Army committee on lessons learned in the last war; their first recommendation was to "keep abreast of modern scientific developments". Anticipating another war, they persuaded the Ministry of Labour to accept Hill as a consultant on scientific manpower. The Royal Society compiled a register of qualified men. They proposed a small committee on science to advise the Committee on Imperial Defence, but this was rejected. Finally in 1940, as Bragg's term ended, a scientific advisory committee to the War Cabinet was appointed. Bragg was among the 2,300 prominent persons listed on the Nazis' Special Search List, those who were to be arrested on the invasion of Great Britain and turned over to the Gestapo. Bragg died in 1942.
Honours and awards
Bragg was joint winner with his son, Lawrence Bragg, of the Nobel Prize in Physics in 1915: "For their services in the analysis of crystal structure by means of X-rays".
Bragg was elected Fellow of the Royal Society in 1907, vice-president in 1920, and served as President of the Royal Society from 1935 to 1940. He was elected an International Member of the United States National Academy of Sciences in 1939 and an International Member of the American Philosophical Society. He was elected as a member of the Royal Academy of Science, Letters and Fine Arts of Belgium on 1 June 1946.
He was appointed Commander of the Order of the British Empire (CBE) in 1917 and Knight Commander (KBE) in the 1920 civilian war honours. He was admitted to the Order of Merit in 1931.
Matteucci Medal (1915)
Rumford Medal (1916)
Copley Medal (1930)
Franklin Medal (1930)
John J. Carty Award of the National Academy of Sciences (1939)
The current Electoral district of Bragg, in the South Australian House of Assembly, was created in 1970, and was named after both William and Lawrence Bragg.
Private life
In 1889, in Adelaide, Bragg married Gwendoline Todd, a skilled water-colour painter and daughter of the astronomer, meteorologist and electrical engineer Sir Charles Todd. They had three children: a daughter, Gwendolen, and two sons, William Lawrence, born in 1890 in North Adelaide, and Robert. Gwendolen married the English architect Alban Caroe, Bragg taught William at the University of Adelaide, and Robert was killed in the Battle of Gallipoli. Bragg's wife, Gwendoline, died in 1929.
Bragg played tennis and golf, and as a founding member of the North Adelaide and Adelaide University Lacrosse Clubs, contributed to the introduction of lacrosse to South Australia and was also the secretary of the Adelaide University Chess Association.
Bragg died in 1942 in England and was survived by his daughter Gwendolen and his son, Lawrence.
Legacy
The lecture theatre of King William's College (KWC) is named in memory of Bragg; the Sixth-Form invitational literary debating society at KWC, the Bragg Society, is also named in his memory. One of the school "Houses" at the Robert Smyth School, Market Harborough, Leicestershire, is named "Bragg" in memory of his having been a student there. Since 1992, the Australian Institute of Physics has awarded The Bragg Gold Medal for Excellence in Physics for the best PhD thesis by a student at an Australian university. The two sides of the medal contain the images of Sir William Henry and his son Sir Lawrence Bragg.
The Experimental Technique Centre at Brunel University is named the Bragg Building. The Sir William Henry Bragg Building at the University of Leeds opened in 2021.
In 1962, the Bragg Laboratories were constructed at the University of Adelaide to commemorate 100 years since the birth of Sir William H. Bragg.
The Australian Bragg Centre for Proton Therapy and Research, also in Adelaide, Australia, was completed in late 2023. It is named for both father and son and offers radiation therapy for cancer patients.
In August 2013, Bragg's relative, the broadcaster Melvyn Bragg, presented a BBC Radio 4 programme "Bragg on the Braggs" on the 1915 Nobel Prize in Physics winners.
Publications
William Henry Bragg, William Lawrence Bragg, "X Rays and Crystal Structure", G. Bell & Son, London, 1915.
William Henry Bragg, The World of Sound (1920)
William Henry Bragg, The Crystalline State – The Romanes Lecture for 1925. Oxford, 1925.
William Henry Bragg, Concerning the Nature of Things (1925)
William Henry Bragg, Old Trades and New Knowledge (1926)
William Henry Bragg, An Introduction to Crystal Analysis (1928)
William Henry Bragg, The Universe of Light (1933)
See also
George Gamow – a 1931 photograph shows him with Bragg, location unspecified.
List of Nobel laureates in Physics
List of presidents of the Royal Society
References
Further reading
"[a] most valuable record of his work and picture of his personality is the excellent obituary written by Professor Andrade of London University for the Royal Society of London." Statement made by Sir Kerr Grant, in:
"The Life and work of Sir William Bragg", the John Murtagh Macrossan Memorial Lecture for 1950, University of Queensland. Written and presented by Sir Kerr Grant, Emeritus Professor of Physics, University of Adelaide. Reproduced as pages 5–37 of Bragg Centenary, 1886–1986, University of Adelaide.
"William and Lawrence Bragg, Father and Son: The Most Extraordinary Collaboration in Science", John Jenkin, Oxford University Press 2008.
Ross, John F. A History of Radio in South Australia 1897–1977 (J. F. Ross, 1978)
External links
Data from the University of Leeds
Fullerian Professorships
Nobelprize.org – The Nobel Prize for Physics in 1915
1862 births
1942 deaths
20th-century British physicists
Academics of University College London
Alumni of Trinity College, Cambridge
Australian people of English descent
Australian lacrosse players
British lacrosse players
Experimental physicists
Optical physicists
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Members of the Royal Academy of Belgium
Academics of the University of Leeds
Knights Commander of the Order of the British Empire
Members of the Order of Merit
Nobel laureates in Physics
People from Wigton
Presidents of the Royal Society
Recipients of the Copley Medal
Academic staff of the University of Adelaide
People educated at King William's College
Presidents of the Institute of Physics
Presidents of the Physical Society
X-ray crystallography
British crystallographers
English Nobel laureates
British Nobel laureates
X-ray pioneers
Leeds Blue Plaques
Recipients of the Matteucci Medal
Presidents of the International Union of Pure and Applied Physics
Cavendish Professors of Physics
Members of the American Philosophical Society
Recipients of Franklin Medal | William Henry Bragg | [
"Chemistry",
"Materials_science"
] | 3,442 | [
"X-ray crystallography",
"Crystallography"
] |
396,507 | https://en.wikipedia.org/wiki/Percy%20Williams%20Bridgman | Percy Williams Bridgman (April 21, 1882 – August 20, 1961) was an American physicist who received the 1946 Nobel Prize in Physics for his work on the physics of high pressures. He also wrote extensively on the scientific method and on other aspects of the philosophy of science. The Bridgman effect, the Bridgman–Stockbarger technique, and the high-pressure mineral bridgmanite are named after him.
Biography
Early life
Bridgman was born in Cambridge, Massachusetts, and grew up in nearby Auburndale.
Bridgman's parents were both born in New England. His father, Raymond Landon Bridgman, was "profoundly religious and idealistic" and worked as a newspaper reporter assigned to state politics. His mother, Mary Ann Maria Williams, was described as "more conventional, sprightly, and competitive".
Bridgman attended both elementary and high school in Auburndale, where he excelled at competitions in the classroom, on the playground, and while playing chess. He was described as both shy and proud, and his home life consisted of family music, card games, and domestic and garden chores. The family was deeply religious, reading the Bible each morning and attending a Congregational church. However, Bridgman later became an atheist.
Education and professional life
Bridgman entered Harvard University in 1900, and studied physics through to his PhD. From 1910 until his retirement, he taught at Harvard, becoming a full professor in 1919. In 1905, he began investigating the properties of matter under high pressure. A machinery malfunction led him to modify his pressure apparatus; the result was a new sealing device enabling him to create pressures eventually exceeding 100,000 kgf/cm2 (10 GPa; 100,000 atmospheres). This was a huge improvement over previous machinery, which could achieve pressures of only 3,000 kgf/cm2 (0.3 GPa). This new apparatus led to an abundance of new findings, including a study of the compressibility, electric and thermal conductivity, tensile strength and viscosity of more than 100 different compounds. Bridgman is also known for his studies of electrical conduction in metals and properties of crystals. He developed the Bridgman seal and is the eponym for Bridgman's thermodynamic equations, which were used to further his research.
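The quoted pressures can be cross-checked with a short unit conversion. The sketch below is illustrative and not from the source; the standard conversion factors are its only inputs.

```python
# Minimal sketch: convert the pressures quoted above from kgf/cm^2 to GPa and
# standard atmospheres, using standard conversion factors.
KGF_PER_CM2_IN_PA = 9.80665e4   # 1 kgf/cm^2 = 9.80665 N / 1e-4 m^2 = 98,066.5 Pa
ATM_IN_PA = 101_325.0           # 1 standard atmosphere in pascals

def kgf_cm2_to_gpa(p: float) -> float:
    return p * KGF_PER_CM2_IN_PA / 1e9

def kgf_cm2_to_atm(p: float) -> float:
    return p * KGF_PER_CM2_IN_PA / ATM_IN_PA

for p in (3_000, 100_000):  # earlier apparatus vs. Bridgman's improved seal
    print(f"{p:>7} kgf/cm^2 ~ {kgf_cm2_to_gpa(p):5.2f} GPa ~ {kgf_cm2_to_atm(p):9,.0f} atm")
```

This reproduces the rounded figures in the text: about 0.3 GPa for the older machinery and roughly 10 GPa (just under 100,000 atmospheres) for Bridgman's apparatus.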
Bridgman made many improvements to his high-pressure apparatus over the years, and unsuccessfully attempted the synthesis of diamond many times. The high-pressure torsion apparatus developed by Bridgman contributed significantly to the development of the field of severe plastic deformation decades later.
His philosophy of science book The Logic of Modern Physics (1927) advocated operationalism and coined the term operational definition. In 1938 he participated in the International Committee composed to organise the International Congresses for the Unity of Science. He was also one of the 11 signatories to the Russell–Einstein Manifesto.
J. Robert Oppenheimer, the director of the Manhattan Project, was an undergraduate student of Bridgman's. Of his teaching abilities, Oppenheimer said: “I found Bridgman a wonderful teacher because he never really was quite reconciled to things being the way they were and he always thought them out.”
Home life and death
Bridgman married Olive Ware (1882-1972), of Hartford, Connecticut, in 1912. Ware's father, Edmund Asa Ware, was the founder and first president of Atlanta University. The couple had two children and were married for nearly 50 years, living most of that time in Cambridge. The family also had a summer home in Randolph, New Hampshire, where Bridgman was known as a skilled mountain climber.
Bridgman was a "penetrating analytical thinker" with a "fertile mechanical imagination" and exceptional manual dexterity. He was a skilled plumber and carpenter, known to shun the assistance of professionals in these matters. He was also fond of music and played the piano, and took pride in his flower and vegetable gardens.
Bridgman committed suicide by gunshot after suffering from metastatic cancer for some time. His suicide note was a mere two sentences: "It isn't decent for society to make a man do this thing himself. Probably this is the last day I will be able to do it myself." Bridgman's words have been quoted by many in the assisted suicide debate.
Honors and awards
Bridgman received honorary doctorates from Stevens Institute (1934), Harvard (1939), Brooklyn Polytechnic (1941), Princeton (1950), Paris (1950), and Yale (1951). He received the Bingham Medal (1951) from the Society of Rheology, the Rumford Prize from the American Academy of Arts and Sciences (1919), the Elliott Cresson Medal (1932) from the Franklin Institute, the Gold Medal of the Bakhuis Roozeboom Fund (founder Hendrik Willem Bakhuis Roozeboom) from the Royal Netherlands Academy of Arts and Sciences (1933), and the Comstock Prize (1933) of the National Academy of Sciences.
Bridgman was a member of the American Physical Society and was its president in 1942. He was also a member of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, the American Philosophical Society, and the National Academy of Sciences. He was a Foreign Member of the Royal Society and Honorary Fellow of the Physical Society of London.
The Percy W. Bridgman House, in Massachusetts, is a U.S. National Historic Landmark designated in 1975.
In 2014, the Commission on New Minerals, Nomenclature and Classification of the International Mineralogical Association approved the name bridgmanite for perovskite-structured (Mg,Fe)SiO3, the Earth's most abundant mineral, in honor of his high-pressure research.
Bibliography
See also
Bridgmanite, the most abundant mineral in Earth's mantle, named after Bridgman
Bridgman's black
Pascalization, also called bridgmanization
Percy W. Bridgman House
Phases of ice, the discovery of high-pressure forms of ice published by P.W. Bridgman in 1912
References
Further reading
Walter, Maila L., 1991. Science and Cultural Crisis: An Intellectual Biography of Percy Williams Bridgman (1882–1961). Stanford Univ. Press.
External links
National Academy of Sciences Biographical Memoir
Percy Williams Bridgman at PhilPapers.
1882 births
1961 suicides
1961 deaths
20th-century American physicists
American Nobel laureates
American atheists
American experimental physicists
Foreign members of the Royal Society
Former Congregationalists
Harvard University alumni
Harvard University faculty
High pressure science
Hollis Chair of Mathematics and Natural Philosophy
Mathematicians from Massachusetts
Members of the United States National Academy of Sciences
Nobel laureates in Physics
Scientists from Cambridge, Massachusetts
Scientists from Newton, Massachusetts
Rheologists
Suicides by firearm in New Hampshire
Thermodynamicists
Presidents of the American Physical Society | Percy Williams Bridgman | [
"Physics",
"Chemistry"
] | 1,410 | [
"High pressure science",
"Applied and interdisciplinary physics",
"Thermodynamics",
"Thermodynamicists"
] |
396,531 | https://en.wikipedia.org/wiki/Edward%20Mills%20Purcell | Edward Mills Purcell (August 30, 1912 – March 7, 1997) was an American physicist who shared the 1952 Nobel Prize for Physics for his independent discovery (published 1946) of nuclear magnetic resonance in liquids and in solids. Nuclear magnetic resonance (NMR) has become widely used to study the molecular structure of pure materials and the composition of mixtures. Friends and colleagues knew him as Ed Purcell.
Biography
Born and raised in Taylorville, Illinois, Purcell received his BSEE in electrical engineering from Purdue University, followed by his M.A. and Ph.D. in physics from Harvard University. He was a member of the Alpha Xi chapter of the Phi Kappa Sigma fraternity while at Purdue. After spending the years of World War II working at the MIT Radiation Laboratory on the development of microwave radar, Purcell returned to Harvard to do research. In December 1945, he discovered nuclear magnetic resonance (NMR) with his colleagues Robert Pound and Henry Torrey. NMR provides scientists with an elegant and precise way of determining chemical structure and properties of materials, and is widely used in physics and chemistry. It also is the basis of magnetic resonance imaging (MRI), one of the most important medical advances of the 20th century. For his discovery of NMR, Purcell shared the 1952 Nobel Prize in physics with Felix Bloch of Stanford University.
Purcell also made contributions to astronomy as the first to detect radio emissions from neutral galactic hydrogen (the famous 21 cm line due to hyperfine splitting), affording the first views of the spiral arms of the Milky Way. This observation helped launch the field of radio astronomy, and measurements of the 21 cm line are still an important technique in modern astronomy. He also made seminal contributions to solid-state physics, with studies of spin-echo relaxation, nuclear magnetic relaxation, and negative spin temperature (important in the development of the laser). With Norman F. Ramsey, he was the first to question the CP symmetry of particle physics.
Purcell was the recipient of many awards for his scientific, educational, and civic work. He served as science advisor to Presidents Dwight D. Eisenhower, John F. Kennedy, and Lyndon B. Johnson. He was president of the American Physical Society, and a member of the American Philosophical Society, the National Academy of Sciences, and the American Academy of Arts and Sciences. He was awarded the National Medal of Science in 1979, and the Jansky Lectureship before the National Radio Astronomy Observatory. Purcell was also inducted into the Phi Kappa Sigma fraternity's Hall of Fame as the first Phi Kap ever to receive a Nobel Prize.
Purcell was the author of the innovative introductory text Electricity and Magnetism. The book, a Sputnik-era project funded by an NSF grant, was influential for its use of relativity in the presentation of the subject at this level. The 1965 edition, now freely available due to a condition of the federal grant, was originally published as a volume of the Berkeley Physics Course. The book is also in print as a commercial third edition, as Purcell and Morin. Purcell is also remembered by biologists for his famous lecture "Life at Low Reynolds Number", in which he explained forces and effects dominating in limiting flow regimes (often at the micro scale). He also emphasized the time-reversibility of low Reynolds number flows with a principle referred to as the Scallop theorem.
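The phrase "low Reynolds number" can be made concrete with a short calculation: the Reynolds number Re = ρ v L / μ compares inertial to viscous forces, and for micron-sized swimmers it is far below one, so viscosity dominates. The sketch below is a minimal illustration, not taken from the lecture; the densities, sizes and speeds are rough assumed values.

```python
# Minimal sketch: Reynolds number Re = rho * v * L / mu for swimmers in water.
# Sizes and speeds are rough illustrative values, not taken from the lecture.
RHO_WATER = 1000.0  # kg/m^3
MU_WATER = 1.0e-3   # Pa*s, dynamic viscosity

def reynolds(speed_m_s: float, length_m: float) -> float:
    return RHO_WATER * speed_m_s * length_m / MU_WATER

swimmers = {
    "bacterium (~1 um long, ~30 um/s)": (30e-6, 1e-6),
    "small fish (~0.1 m, ~0.5 m/s)": (0.5, 0.1),
    "human swimmer (~2 m, ~1 m/s)": (1.0, 2.0),
}
for name, (v, L) in swimmers.items():
    print(f"{name}: Re ~ {reynolds(v, L):.1e}")
```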
Purcell died on March 7, 1997, in Cambridge, Massachusetts, aged 84.
See also
Dynamical decoupling
J-coupling
Magnetic resonance imaging (MRI)
Neutron electric dipole moment
Spin echo
Relativistic electromagnetism
List of textbooks in electromagnetism
References
External links
1912 births
1997 deaths
20th-century American physicists
American Nobel laureates
American nuclear physicists
American experimental physicists
Fellows of the American Academy of Arts and Sciences
Harvard University alumni
Harvard University faculty
Massachusetts Institute of Technology faculty
National Medal of Science laureates
Nobel laureates in Physics
People from Taylorville, Illinois
Purdue University College of Engineering alumni
Winners of the Beatrice M. Tinsley Prize
Members of the United States National Academy of Sciences
Foreign members of the Royal Society
Nuclear magnetic resonance
Time Person of the Year
Presidents of the American Physical Society | Edward Mills Purcell | [
"Physics",
"Chemistry"
] | 841 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
21,210 | https://en.wikipedia.org/wiki/Niels%20Bohr | Niels Henrik David Bohr (7 October 1885 – 18 November 1962) was a Danish theoretical physicist who made foundational contributions to understanding atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922. Bohr was also a philosopher and a promoter of scientific research.
Bohr developed the Bohr model of the atom, in which he proposed that energy levels of electrons are discrete and that the electrons revolve in stable orbits around the atomic nucleus but can jump from one energy level (or orbit) to another. Although the Bohr model has been supplanted by other models, its underlying principles remain valid. He conceived the principle of complementarity: that items could be separately analysed in terms of contradictory properties, like behaving as a wave or a stream of particles. The notion of complementarity dominated Bohr's thinking in both science and philosophy.
Bohr founded the Institute of Theoretical Physics at the University of Copenhagen, now known as the Niels Bohr Institute, which opened in 1920. Bohr mentored and collaborated with physicists including Hans Kramers, Oskar Klein, George de Hevesy, and Werner Heisenberg. He predicted the properties of a new zirconium-like element, which was named hafnium, after the Latin name for Copenhagen, where it was discovered. Later, the synthetic element bohrium was named after him because of his groundbreaking work on the structure of atoms.
During the 1930s, Bohr helped refugees from Nazism. After Denmark was occupied by the Germans, he met with Heisenberg, who had become the head of the German nuclear weapon project. In September 1943 word reached Bohr that he was about to be arrested by the Germans, so he fled to Sweden. From there, he was flown to Britain, where he joined the British Tube Alloys nuclear weapons project, and was part of the British mission to the Manhattan Project. After the war, Bohr called for international cooperation on nuclear energy. He was involved with the establishment of CERN and the Research Establishment Risø of the Danish Atomic Energy Commission and became the first chairman of the Nordic Institute for Theoretical Physics in 1957.
Early life
Niels Henrik David Bohr was born in Copenhagen, Denmark, on 7 October 1885, the second of three children of Christian Bohr, a professor of physiology at the University of Copenhagen, and his wife Ellen Adler, who came from a wealthy Jewish banking family. He had an elder sister, Jenny, and a younger brother Harald. Jenny became a teacher, while Harald became a mathematician and footballer who played for the Danish national team at the 1908 Summer Olympics in London. Niels was a passionate footballer as well, and the two brothers played several matches for the Copenhagen-based Akademisk Boldklub (Academic Football Club), with Niels as goalkeeper.
Bohr was educated at Gammelholm Latin School, starting when he was seven. In 1903, Bohr enrolled as an undergraduate at Copenhagen University. His major was physics, which he studied under Professor Christian Christiansen, the university's only professor of physics at that time. He also studied astronomy and mathematics under Professor Thorvald Thiele, and philosophy under Professor Harald Høffding, a friend of his father.
In 1905 a gold medal competition was sponsored by the Royal Danish Academy of Sciences and Letters to investigate a method for measuring the surface tension of liquids that had been proposed by Lord Rayleigh in 1879. This involved measuring the frequency of oscillation of the radius of a water jet. Bohr conducted a series of experiments using his father's laboratory in the university; the university itself had no physics laboratory. To complete his experiments, he had to make his own glassware, creating test tubes with the required elliptical cross-sections. He went beyond the original task, incorporating improvements into both Rayleigh's theory and his method, by taking into account the viscosity of the water, and by working with finite amplitudes instead of just infinitesimal ones. His essay, which he submitted at the last minute, won the prize. He later submitted an improved version of the paper to the Royal Society in London for publication in the Philosophical Transactions of the Royal Society.
Harald became the first of the two Bohr brothers to earn a master's degree, which he earned for mathematics in April 1909. Niels took another nine months to earn his on the electron theory of metals, a topic assigned by his supervisor, Christiansen. Bohr subsequently elaborated his master's thesis into his much-larger Doctor of Philosophy thesis. He surveyed the literature on the subject, settling on a model postulated by Paul Drude and elaborated by Hendrik Lorentz, in which the electrons in a metal are considered to behave like a gas. Bohr extended Lorentz's model, but was still unable to account for phenomena like the Hall effect, and concluded that electron theory could not fully explain the magnetic properties of metals. The thesis was accepted in April 1911, and Bohr conducted his formal defence on 13 May. Harald had received his doctorate the previous year. Bohr's thesis was groundbreaking, but attracted little interest outside Scandinavia because it was written in Danish, a Copenhagen University requirement at the time. In 1921, the Dutch physicist Hendrika Johanna van Leeuwen would independently derive a theorem in Bohr's thesis that is today known as the Bohr–Van Leeuwen theorem.
In 1910, Bohr met Margrethe Nørlund, the sister of the mathematician Niels Erik Nørlund. Bohr resigned his membership in the Church of Denmark on 16 April 1912, and he and Margrethe were married in a civil ceremony at the town hall in Slagelse on 1 August. Years later, his brother Harald similarly left the church before getting married. Bohr and Margrethe had six sons. The oldest, Christian, died in a boating accident in 1934, and another, Harald, was severely mentally disabled. He was placed in an institution away from his family's home at the age of four and died from childhood meningitis six years later. Aage Bohr became a successful physicist, and in 1975 was awarded the Nobel Prize in Physics, like his father. A son of Aage, Vilhelm A. Bohr, is a scientist affiliated with the University of Copenhagen and the National Institute on Aging in the U.S. Of the other sons, Hans became a physician; Erik, a chemical engineer; and Ernest, a lawyer. Like his uncle Harald, Ernest Bohr became an Olympic athlete, playing field hockey for Denmark at the 1948 Summer Olympics in London.
Physics
Bohr model
In September 1911, Bohr, supported by a fellowship from the Carlsberg Foundation, travelled to England, where most of the theoretical work on the structure of atoms and molecules was being done. He met J. J. Thomson of the Cavendish Laboratory and Trinity College, Cambridge. He attended lectures on electromagnetism given by James Jeans and Joseph Larmor, and did some research on cathode rays, but failed to impress Thomson. He had more success with younger physicists like the Australian William Lawrence Bragg, and New Zealand's Ernest Rutherford, whose 1911 small central nucleus Rutherford model of the atom had challenged Thomson's 1904 plum pudding model. Bohr received an invitation from Rutherford to conduct post-doctoral work at Victoria University of Manchester, where Bohr met George de Hevesy and Charles Galton Darwin (whom Bohr referred to as "the grandson of the real Darwin").
Bohr returned to Denmark in July 1912 for his wedding, and travelled around England and Scotland on his honeymoon. On his return, he became a privatdocent at the University of Copenhagen, giving lectures on thermodynamics. Martin Knudsen put Bohr's name forward for a docent, which was approved in July 1913, and Bohr then began teaching medical students. His three papers, which later became famous as "the trilogy", were published in Philosophical Magazine in July, September and November of that year. He adapted Rutherford's nuclear structure to Max Planck's quantum theory and so created his Bohr model of the atom.
Planetary models of atoms were not new, but Bohr's treatment was. Taking the 1912 paper by Darwin on the role of electrons in the interaction of alpha particles with a nucleus as his starting point, he advanced the theory of electrons travelling in orbits of quantised "stationary states" around the atom's nucleus in order to stabilise the atom, but it wasn't until his 1921 paper that he showed that the chemical properties of each element were largely determined by the number of electrons in the outer orbits of its atoms. He introduced the idea that an electron could drop from a higher-energy orbit to a lower one, in the process emitting a quantum of discrete energy. This became a basis for what is now known as the old quantum theory.
In 1885, Johann Balmer had come up with his Balmer series to describe the visible spectral lines of a hydrogen atom:

$$\frac{1}{\lambda} = R_H \left( \frac{1}{2^2} - \frac{1}{n^2} \right), \qquad n = 3, 4, 5, \ldots$$

where λ is the wavelength of the absorbed or emitted light and R_H is the Rydberg constant. Balmer's formula was corroborated by the discovery of additional spectral lines, but for thirty years, no one could explain why it worked. In the first paper of his trilogy, Bohr was able to derive it from his model, which gives the constant as

$$R_Z = \frac{2 \pi^2 m_e e^4 Z^2}{h^3 c},$$

where m_e is the electron's mass, e is its charge, h is the Planck constant and Z is the atom's atomic number (1 for hydrogen).
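A quick numerical check of this derivation is possible. The sketch below is not part of the article; the CODATA constant values and the SI form R = m_e e^4 / (8 ε0^2 h^3 c), which is equivalent to the Gaussian expression above, are assumptions made here for illustration.

```python
# Minimal sketch: compute the Rydberg constant from Bohr's model and predict
# the visible Balmer lines of hydrogen. Constants are CODATA values (SI units);
# the SI form R = m_e e^4 / (8 eps0^2 h^3 c) is equivalent to the Gaussian
# expression 2 pi^2 m_e e^4 / (h^3 c) quoted in the text.
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

R = m_e * e**4 / (8 * eps0**2 * h**3 * c)
print(f"Rydberg constant from the model: {R:.5e} per metre")  # ~1.0974e7

# Balmer series: 1/lambda = R * (1/2^2 - 1/n^2) for n = 3, 4, 5, 6
for n in range(3, 7):
    wavelength_nm = 1e9 / (R * (0.25 - 1.0 / n**2))
    print(f"n = {n}: {wavelength_nm:.1f} nm")
```

The computed lines fall at roughly 656, 486, 434 and 410 nm, matching the observed Balmer series.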
The model's first hurdle was the Pickering series, lines that did not fit Balmer's formula. When challenged on this by Alfred Fowler, Bohr replied that they were caused by ionised helium, helium atoms with only one electron. The Bohr model was found to work for such ions. Many older physicists, like Thomson, Rayleigh and Hendrik Lorentz, did not like the trilogy, but the younger generation, including Rutherford, David Hilbert, Albert Einstein, Enrico Fermi, Max Born and Arnold Sommerfeld saw it as a breakthrough. The trilogy's acceptance was entirely due to its ability to explain phenomena that stymied other models, and to predict results that were subsequently verified by experiments. Today, the Bohr model of the atom has been superseded, but is still the best known model of the atom, as it often appears in high school physics and chemistry texts.
Bohr did not enjoy teaching medical students. He later admitted that he was not a good lecturer, because he needed a balance between clarity and truth, between "Klarheit und Wahrheit". He decided to return to Manchester, where Rutherford had offered him a job as a reader in place of Darwin, whose tenure had expired. Bohr accepted. He took a leave of absence from the University of Copenhagen, which he started by taking a holiday in Tyrol with his brother Harald and aunt Hanna Adler. There, he visited the University of Göttingen and the Ludwig Maximilian University of Munich, where he met Sommerfeld and conducted seminars on the trilogy. The First World War broke out while they were in Tyrol, greatly complicating the trip back to Denmark and Bohr's subsequent voyage with Margrethe to England, where he arrived in October 1914. They stayed until July 1916, by which time he had been appointed to the Chair of Theoretical Physics at the University of Copenhagen, a position created especially for him. His docentship was abolished at the same time, so he still had to teach physics to medical students. New professors were formally introduced to King Christian X, who expressed his delight at meeting such a famous football player.
Institute of Physics
In April 1917, Bohr began a campaign to establish an Institute of Theoretical Physics. He gained the support of the Danish government and the Carlsberg Foundation, and sizeable contributions were also made by industry and private donors, many of them Jewish. Legislation establishing the institute was passed in November 1918. Now known as the Niels Bohr Institute, it opened on 3 March 1921, with Bohr as its director. His family moved into an apartment on the first floor. Bohr's institute served as a focal point for researchers into quantum mechanics and related subjects in the 1920s and 1930s, when most of the world's best-known theoretical physicists spent some time in his company. Early arrivals included Hans Kramers from the Netherlands, Oskar Klein from Sweden, George de Hevesy from Hungary, Wojciech Rubinowicz from Poland, and Svein Rosseland from Norway. Bohr became widely appreciated as their congenial host and eminent colleague. Klein and Rosseland produced the institute's first publication even before it opened.
The Bohr model worked well for hydrogen and ionized single-electron helium, which impressed Einstein, but could not explain more complex elements. By 1919, Bohr was moving away from the idea that electrons orbited the nucleus and developed heuristics to describe them. The rare-earth elements posed a particular classification problem for chemists because they were so chemically similar. An important development came in 1924 with Wolfgang Pauli's discovery of the Pauli exclusion principle, which put Bohr's models on a firm theoretical footing. Bohr was then able to declare that the as-yet-undiscovered element 72 was not a rare-earth element but an element with chemical properties similar to those of zirconium (elements had been predicted and discovered by their chemical properties since 1871). Bohr was immediately challenged by the French chemist Georges Urbain, who claimed to have discovered a rare-earth element 72, which he called "celtium". At the Institute in Copenhagen, Dirk Coster and George de Hevesy took up the challenge of proving Bohr right and Urbain wrong. Starting with a clear idea of the chemical properties of the unknown element greatly simplified the search process. They went through samples from Copenhagen's Museum of Mineralogy looking for a zirconium-like element and soon found it. The element, which they named hafnium (hafnia being the Latin name for Copenhagen), turned out to be more common than gold.
In 1922, Bohr was awarded the Nobel Prize in Physics "for his services in the investigation of the structure of atoms and of the radiation emanating from them". The award thus recognised both the trilogy and his early leading work in the emerging field of quantum mechanics. For his Nobel lecture, Bohr gave his audience a comprehensive survey of what was then known about the structure of the atom, including the correspondence principle, which he had formulated. This states that the behaviour of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers.
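The correspondence principle can be illustrated numerically. The following is a minimal sketch, not from the article; the Rydberg constant value and the standard Bohr-model expressions for the photon and orbital frequencies are assumed here. For large n, the frequency of the photon emitted in an n → n−1 jump approaches the classical orbital frequency of the electron.

```python
# Minimal sketch of the correspondence principle in the Bohr model: for large n,
# the n -> n-1 photon frequency approaches the classical orbital frequency 2*R*c/n^3.
R = 1.0973731568e7   # Rydberg constant, 1/m (accepted value, assumed here)
c = 2.99792458e8     # speed of light, m/s

def photon_frequency(n: int) -> float:
    """Frequency of the photon emitted in the n -> n-1 transition."""
    return R * c * (1.0 / (n - 1) ** 2 - 1.0 / n ** 2)

def orbital_frequency(n: int) -> float:
    """Classical orbital frequency of the electron in the nth Bohr orbit."""
    return 2.0 * R * c / n ** 3

for n in (10, 100, 1000, 10000):
    ratio = photon_frequency(n) / orbital_frequency(n)
    print(f"n = {n:>5}: photon/orbital frequency ratio = {ratio:.5f}")
```

The ratio tends to 1 as n grows, which is the sense in which the quantum description reproduces classical physics at large quantum numbers.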
The discovery of Compton scattering by Arthur Holly Compton in 1923 convinced most physicists that light was composed of photons and that energy and momentum were conserved in collisions between electrons and photons. In 1924, Bohr, Kramers, and John C. Slater, an American physicist working at the Institute in Copenhagen, proposed the Bohr–Kramers–Slater theory (BKS). It was more of a program than a full physical theory, as the ideas it developed were not worked out quantitatively. The BKS theory became the final attempt at understanding the interaction of matter and electromagnetic radiation on the basis of the old quantum theory, in which quantum phenomena were treated by imposing quantum restrictions on a classical wave description of the electromagnetic field.
Modelling atomic behaviour under incident electromagnetic radiation using "virtual oscillators" at the absorption and emission frequencies, rather than the (different) apparent frequencies of the Bohr orbits, led Max Born, Werner Heisenberg and Kramers to explore different mathematical models. They led to the development of matrix mechanics, the first form of modern quantum mechanics. The BKS theory also generated discussion of, and renewed attention to, difficulties in the foundations of the old quantum theory. The most provocative element of BKS – that momentum and energy would not necessarily be conserved in each interaction, but only statistically – was soon shown to be in conflict with experiments conducted by Walther Bothe and Hans Geiger. In light of these results, Bohr informed Darwin that "there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible".
Quantum mechanics
The introduction of spin by George Uhlenbeck and Samuel Goudsmit in November 1925 was a milestone. The next month, Bohr travelled to Leiden to attend celebrations of the 50th anniversary of Hendrik Lorentz receiving his doctorate. When his train stopped in Hamburg, he was met by Wolfgang Pauli and Otto Stern, who asked for his opinion of the spin theory. Bohr pointed out that he had concerns about the interaction between electrons and magnetic fields. When he arrived in Leiden, Paul Ehrenfest and Albert Einstein informed Bohr that Einstein had resolved this problem using relativity. Bohr then had Uhlenbeck and Goudsmit incorporate this into their paper. Thus, when he met Werner Heisenberg and Pascual Jordan in Göttingen on the way back, he had become, in his own words, "a prophet of the electron magnet gospel".
Heisenberg first came to Copenhagen in 1924, then returned to Göttingen in June 1925, shortly thereafter developing the mathematical foundations of quantum mechanics. When he showed his results to Max Born in Göttingen, Born realised that they could best be expressed using matrices. This work attracted the attention of the British physicist Paul Dirac, who came to Copenhagen for six months in September 1926. Austrian physicist Erwin Schrödinger also visited in 1926. His attempt at explaining quantum physics in classical terms using wave mechanics impressed Bohr, who believed it contributed "so much to mathematical clarity and simplicity that it represents a gigantic advance over all previous forms of quantum mechanics".
When Kramers left the institute in 1926 to take up a chair as professor of theoretical physics at the Utrecht University, Bohr arranged for Heisenberg to return and take Kramers's place as a lektor at the University of Copenhagen. Heisenberg worked in Copenhagen as a university lecturer and assistant to Bohr from 1926 to 1927.
Bohr became convinced that light behaved like both waves and particles and, in 1927, experiments confirmed the de Broglie hypothesis that matter (like electrons) also behaved like waves. He conceived the philosophical principle of complementarity: that items could have apparently mutually exclusive properties, such as being a wave or a stream of particles, depending on the experimental framework. He felt that it was not fully understood by professional philosophers.
In February 1927, Heisenberg developed the first version of the uncertainty principle, presenting it using a thought experiment where an electron was observed through a gamma-ray microscope. Bohr was dissatisfied with Heisenberg's argument, since it required only that a measurement disturb properties that already existed, rather than the more radical idea that the electron's properties could not be discussed at all apart from the context they were measured in. In a paper presented at the Volta Conference at Como in September 1927, Bohr emphasised that Heisenberg's uncertainty relations could be derived from classical considerations about the resolving power of optical instruments. Understanding the true meaning of complementarity would, Bohr believed, require "closer investigation". Einstein preferred the determinism of classical physics over the probabilistic new quantum physics to which he himself had contributed. Philosophical issues that arose from the novel aspects of quantum mechanics became widely celebrated subjects of discussion. Einstein and Bohr had good-natured arguments over such issues throughout their lives.
In 1914 Carl Jacobsen, the heir to Carlsberg breweries, bequeathed his mansion (the Carlsberg Honorary Residence, currently known as Carlsberg Academy) to be used for life, as an honorary residence, by the Dane who had made the most prominent contribution to science, literature or the arts. Harald Høffding had been the first occupant, and upon his death in July 1931, the Royal Danish Academy of Sciences and Letters gave Bohr occupancy. He and his family moved there in 1932. He was elected president of the Academy on 17 March 1939.
By 1929 the phenomenon of beta decay prompted Bohr to again suggest that the law of conservation of energy be abandoned, but Enrico Fermi's hypothetical neutrino and the subsequent 1932 discovery of the neutron provided another explanation. This prompted Bohr to create a new theory of the compound nucleus in 1936, which explained how neutrons could be captured by the nucleus. In this model, the nucleus could be deformed like a drop of liquid. He worked on this with a new collaborator, the Danish physicist Fritz Kalckar, who died suddenly in 1938.
The discovery of nuclear fission by Otto Hahn in December 1938 (and its theoretical explanation by Lise Meitner) generated intense interest among physicists. Bohr brought the news to the United States where he opened the Fifth Washington Conference on Theoretical Physics with Fermi on 26 January 1939. When Bohr told George Placzek that this resolved all the mysteries of transuranic elements, Placzek told him that one remained: the neutron capture energies of uranium did not match those of its decay. Bohr thought about it for a few minutes and then announced to Placzek, Léon Rosenfeld and John Wheeler that "I have understood everything." Based on his liquid drop model of the nucleus, Bohr concluded that it was the uranium-235 isotope and not the more abundant uranium-238 that was primarily responsible for fission with thermal neutrons. In April 1940, John R. Dunning demonstrated that Bohr was correct. In the meantime, Bohr and Wheeler developed a theoretical treatment, which they published in a September 1939 paper on "The Mechanism of Nuclear Fission".
Philosophy
Heisenberg said of Bohr that he was "primarily a philosopher, not a physicist". Bohr read the 19th-century Danish Christian existentialist philosopher Søren Kierkegaard. Richard Rhodes argued in The Making of the Atomic Bomb that Bohr was influenced by Kierkegaard through Høffding. In 1909, Bohr sent his brother Kierkegaard's Stages on Life's Way as a birthday gift. In the enclosed letter, Bohr wrote, "It is the only thing I have to send home; but I do not believe that it would be very easy to find anything better ... I even think it is one of the most delightful things I have ever read." Bohr enjoyed Kierkegaard's language and literary style, but mentioned that he had some disagreement with Kierkegaard's philosophy. Some of Bohr's biographers suggested that this disagreement stemmed from Kierkegaard's advocacy of Christianity, while Bohr was an atheist.
There has been some dispute over the extent to which Kierkegaard influenced Bohr's philosophy and science. David Favrholdt argued that Kierkegaard had minimal influence over Bohr's work, taking Bohr's statement about disagreeing with Kierkegaard at face value, while Jan Faye argued that one can disagree with the content of a theory while accepting its general premises and structure.
Bohr sat on the Board of Editors of the book series World Perspectives which published a variety of books on philosophy.
Quantum physics
There has been much subsequent debate and discussion about Bohr's views and philosophy of quantum mechanics. Regarding his ontological interpretation of the quantum world, Bohr has been seen as an anti-realist, an instrumentalist, a phenomenological realist or some other kind of realist. Furthermore, though some have seen Bohr as being a subjectivist or a positivist, most philosophers agree that this is a misunderstanding of Bohr as he never argued for verificationism or for the idea that the subject had a direct impact on the outcome of a measurement.
Bohr has often been quoted saying that there is "no quantum world" but only an "abstract quantum physical description". This was not publicly said by Bohr, but rather a private statement attributed to Bohr by Aage Petersen in a reminiscence after his death. N. David Mermin recalled Victor Weisskopf declaring that Bohr wouldn't have said anything of the sort and exclaiming, "Shame on Aage Petersen for putting those ridiculous words in Bohr's mouth!"
Numerous scholars have argued that the philosophy of Immanuel Kant had a strong influence on Bohr. Like Kant, Bohr thought distinguishing between the subject's experience and the object was an important condition for attaining knowledge. This can only be done through the use of causal and spatial-temporal concepts to describe the subject's experience. Thus, according to Jan Faye, Bohr thought that it is because of "classical" concepts like "space", "position", "time", "causation", and "momentum" that one can talk about objects and their objective existence. Bohr held that basic concepts like "time" are built in to our ordinary language and that the concepts of classical physics are merely a refinement of them. Therefore, for Bohr, classical concepts need to be used to describe experiments that deal with the quantum world. Bohr writes:
[T]he account of all evidence must be expressed in classical terms. The argument is simply that by the word 'experiment' we refer to a situation where we can tell to others what we have done and what we have learned and that, therefore, the account of the experimental arrangement and of the results of the observations must be expressed in unambiguous language with suitable application of the terminology of classical physics (APHK, p. 39).
According to Faye, there are various explanations for why Bohr believed that classical concepts were necessary for describing quantum phenomena. Faye groups the explanations into five frameworks: empiricism (i.e. logical positivism); Kantianism (or neo-Kantian models of epistemology); Pragmatism (which focuses on how human beings experientially interact with atomic systems according to their needs and interests); Darwinianism (i.e. we are adapted to use classical-type concepts, which Léon Rosenfeld said we evolved to use); and Experimentalism (which focuses strictly on the function and outcome of experiments, which thus must be described classically). These explanations are not mutually exclusive, and at times Bohr seems to emphasise some of these aspects while at other times he focuses on other elements.
According to Faye "Bohr thought of the atom as real. Atoms are neither heuristic nor logical constructions." However, according to Faye, he did not believe "that the quantum mechanical formalism was true in the sense that it gave us a literal ('pictorial') rather than a symbolic representation of the quantum world." Therefore, Bohr's theory of complementarity "is first and foremost a semantic and epistemological reading of quantum mechanics that carries certain ontological implications". As Faye explains, Bohr's indefinability thesis is that
[T]he truth conditions of sentences ascribing a certain kinematic or dynamic value to an atomic object are dependent on the apparatus involved, in such a way that these truth conditions have to include reference to the experimental setup as well as the actual outcome of the experiment.
Faye notes that Bohr's interpretation makes no reference to a "collapse of the wave function during measurements" (and indeed, he never mentioned this idea). Instead, Bohr "accepted the Born statistical interpretation because he believed that the ψ-function has only a symbolic meaning and does not represent anything real". Since for Bohr, the ψ-function is not a literal pictorial representation of reality, there can be no real collapse of the wavefunction.
A much debated point in recent literature is what Bohr believed about atoms and their reality and whether they are something else than what they seem to be. Some like Henry Folse argue that Bohr saw a distinction between observed phenomena and a transcendental reality. Jan Faye disagrees with this position and holds that for Bohr, the quantum formalism and complementarity was the only thing we could say about the quantum world and that "there is no further evidence in Bohr's writings indicating that Bohr would attribute intrinsic and measurement-independent state properties to atomic objects [...] in addition to the classical ones being manifested in measurement."
Second World War
Assistance to refugee scholars
The rise of Nazism in Germany prompted many scholars to flee their countries, either because they were Jewish or because they were political opponents of the Nazi regime. In 1933, the Rockefeller Foundation created a fund to help support refugee academics, and Bohr discussed this programme with the President of the Rockefeller Foundation, Max Mason, in May 1933 during a visit to the United States. Bohr offered the refugees temporary jobs at the institute, provided them with financial support, arranged for them to be awarded fellowships from the Rockefeller Foundation, and ultimately found them places at institutions around the world. Those that he helped included Guido Beck, Felix Bloch, James Franck, George de Hevesy, Otto Frisch, Hilde Levi, Lise Meitner, George Placzek, Eugene Rabinowitch, Stefan Rozental, Erich Ernst Schneider, Edward Teller, Arthur von Hippel and Victor Weisskopf.
In April 1940, early in the Second World War, Nazi Germany invaded and occupied Denmark. To prevent the Germans from discovering Max von Laue's and James Franck's gold Nobel medals, Bohr had de Hevesy dissolve them in aqua regia. In this form, they were stored on a shelf at the Institute until after the war, when the gold was precipitated and the medals re-struck by the Nobel Foundation. Bohr's own medal had been donated to an auction to the Finnish Relief Fund, and was auctioned off in March 1940, along with the medal of August Krogh. The buyer later donated the two medals to the Danish Historical Museum in Frederiksborg Castle, where they are still kept, although Bohr's medal temporarily went to space with Andreas Mogensen on ISS Expedition 70 in 2023-2024.
Bohr kept the Institute running, but all the foreign scholars departed.
Meeting with Heisenberg
Bohr was aware of the possibility of using uranium-235 to construct an atomic bomb, referring to it in lectures in Britain and Denmark shortly before and after the war started, but he did not believe that it was technically feasible to extract a sufficient quantity of uranium-235. In September 1941, Heisenberg, who had become head of the German nuclear energy project, visited Bohr in Copenhagen. During this meeting the two men took a private moment outside, the content of which has caused much speculation, as both gave differing accounts.
According to Heisenberg, he began to address nuclear energy, morality and the war, to which Bohr seems to have reacted by terminating the conversation abruptly while not giving Heisenberg hints about his own opinions. Ivan Supek, one of Heisenberg's students and friends, claimed that the main subject of the meeting was Carl Friedrich von Weizsäcker, who had proposed trying to persuade Bohr to mediate peace between Britain and Germany.
In 1957, Heisenberg wrote to Robert Jungk, who was then working on the book Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. Heisenberg explained that he had visited Copenhagen to communicate to Bohr the views of several German scientists, that production of a nuclear weapon was possible with great efforts, and this raised enormous responsibilities on the world's scientists on both sides. When Bohr saw Jungk's depiction in the Danish translation of the book, he drafted (but never sent) a letter to Heisenberg, stating that he deeply disagreed with Heisenberg's account of the meeting, that he recalled Heisenberg's visit as being to encourage cooperation with the inevitably victorious Nazis and that he was shocked that Germany was pursuing nuclear weapons under Heisenberg's leadership.
Michael Frayn's 1998 play Copenhagen explores what might have happened at the 1941 meeting between Heisenberg and Bohr. A television film version of the play by the BBC was first screened on 26 September 2002, with Stephen Rea as Bohr. With the subsequent release of Bohr's letters, the play has been criticised by historians as being a "grotesque oversimplification and perversion of the actual moral balance" due to adopting a pro-Heisenberg perspective.
The same meeting had previously been dramatised by the BBC's Horizon science documentary series in 1992, with Anthony Bate as Bohr, and Philip Anthony as Heisenberg. The meeting is also dramatised in the Norwegian/Danish/British miniseries The Heavy Water War.
Manhattan Project
In September 1943, word reached Bohr and his brother Harald that the Nazis considered their family to be Jewish, since their mother was Jewish, and that they were therefore in danger of being arrested. The Danish resistance helped Bohr and his wife escape by sea to Sweden on 29 September. The next day, Bohr persuaded King Gustaf V of Sweden to make public Sweden's willingness to provide asylum to Jewish refugees. On 2 October 1943, Swedish radio broadcast that Sweden was ready to offer asylum, and the mass rescue of the Danish Jews by their countrymen followed swiftly thereafter. Some historians claim that Bohr's actions led directly to the mass rescue, while others say that, though Bohr did all that he could for his countrymen, his actions were not a decisive influence on the wider events. Eventually, over 7,000 Danish Jews escaped to Sweden.
When the news of Bohr's escape reached Britain, Lord Cherwell sent a telegram to Bohr asking him to come to Britain. Bohr arrived in Scotland on 6 October in a de Havilland Mosquito operated by the British Overseas Airways Corporation (BOAC). The Mosquitos were unarmed high-speed bomber aircraft that had been converted to carry small, valuable cargoes or important passengers. By flying at high speed and high altitude, they could cross German-occupied Norway, and yet avoid German fighters. Bohr, equipped with parachute, flying suit and oxygen mask, spent the three-hour flight lying on a mattress in the aircraft's bomb bay. During the flight, Bohr did not wear his flying helmet as it was too small, and consequently did not hear the pilot's intercom instruction to turn on his oxygen supply when the aircraft climbed to high altitude to overfly Norway. He passed out from oxygen starvation and only revived when the aircraft descended to lower altitude over the North Sea. Bohr's son Aage followed his father to Britain on another flight a week later, and became his personal assistant.
Bohr was warmly received by James Chadwick and Sir John Anderson, but for security reasons Bohr was kept out of sight. He was given an apartment at St James's Palace and an office with the British Tube Alloys nuclear weapons development team. Bohr was astonished at the amount of progress that had been made. Chadwick arranged for Bohr to visit the United States as a Tube Alloys consultant, with Aage as his assistant. On 8 December 1943, Bohr arrived in Washington, D.C., where he met with the director of the Manhattan Project, Brigadier General Leslie R. Groves Jr. He visited Einstein and Pauli at the Institute for Advanced Study in Princeton, New Jersey, and went to Los Alamos in New Mexico, where the nuclear weapons were being designed. For security reasons, he went under the name of "Nicholas Baker" in the United States, while Aage became "James Baker". In May 1944 the Danish resistance newspaper De frie Danske reported that it had learned that "the famous son of Denmark Professor Niels Bohr" had fled his country via Sweden to London in October of the previous year, and that from there he had travelled on to Moscow, from where he could be assumed to be supporting the war effort.
Bohr did not remain at Los Alamos, but paid a series of extended visits over the course of the next two years. Robert Oppenheimer credited Bohr with acting "as a scientific father figure to the younger men", most notably Richard Feynman. Bohr is quoted as saying, "They didn't need my help in making the atom bomb." Oppenheimer gave Bohr credit for an important contribution to the work on modulated neutron initiators. "This device remained a stubborn puzzle", Oppenheimer noted, "but in early February 1945 Niels Bohr clarified what had to be done".
Bohr recognised early that nuclear weapons would change international relations. In April 1944, he received a letter from Peter Kapitza, written some months before when Bohr was in Sweden, inviting him to come to the Soviet Union. The letter convinced Bohr that the Soviets were aware of the Anglo-American project, and would strive to catch up. He sent Kapitza a non-committal response, which he showed to the authorities in Britain before posting. Bohr met Churchill on 16 May 1944, but found that "we did not speak the same language". Churchill disagreed with the idea of openness towards the Russians to the point that he wrote in a letter: "It seems to me Bohr ought to be confined or at any rate made to see that he is very near the edge of mortal crimes."
Oppenheimer suggested that Bohr visit President Franklin D. Roosevelt to convince him that the Manhattan Project should be shared with the Soviets in the hope of speeding up its results. Bohr's friend, Supreme Court Justice Felix Frankfurter, informed President Roosevelt about Bohr's opinions, and a meeting between them took place on 26 August 1944. Roosevelt suggested that Bohr return to the United Kingdom to try to win British approval. When Churchill and Roosevelt met at Hyde Park on 19 September 1944, they rejected the idea of informing the world about the project, and the aide-mémoire of their conversation contained a rider that "enquiries should be made regarding the activities of Professor Bohr and steps taken to ensure that he is responsible for no leakage of information, particularly to the Russians".
In June 1950, Bohr addressed an "Open Letter" to the United Nations calling for international cooperation on nuclear energy. In the 1950s, after the Soviet Union's first nuclear weapon test, the International Atomic Energy Agency was created along the lines of Bohr's suggestion. In 1957 he received the first ever Atoms for Peace Award.
Later years
Following the ending of the war, Bohr returned to Copenhagen on 25 August 1945, and was re-elected President of the Royal Danish Academy of Arts and Sciences on 21 September. At a memorial meeting of the Academy on 17 October 1947 for King Christian X, who had died in April, the new king, Frederik IX, announced that he was conferring the Order of the Elephant on Bohr. The order was normally awarded only to royalty and heads of state, but the king said that it honoured not just Bohr personally, but Danish science. Bohr designed his own coat of arms, which featured a taijitu (symbol of yin and yang) and a motto in Latin, contraria sunt complementa ("opposites are complementary").
The Second World War demonstrated that science, and physics in particular, now required considerable financial and material resources. To avoid a brain drain to the United States, twelve European countries banded together to create CERN, a research organisation along the lines of the national laboratories in the United States, designed to undertake Big Science projects beyond the resources of any one of them alone. Questions soon arose regarding the best location for the facilities. Bohr and Kramers felt that the Institute in Copenhagen would be the ideal site. Pierre Auger, who organised the preliminary discussions, disagreed; he felt that both Bohr and his Institute were past their prime, and that Bohr's presence would overshadow others. After a long debate, Bohr pledged his support to CERN in February 1952, and Geneva was chosen as the site in October. The CERN Theory Group was based in Copenhagen until their new accommodation in Geneva was ready in 1957. Victor Weisskopf, who later became the Director General of CERN, summed up Bohr's role, saying that "there were other personalities who started and conceived the idea of CERN. The enthusiasm and ideas of the other people would not have been enough, however, if a man of his stature had not supported it."
Meanwhile, Scandinavian countries formed the Nordic Institute for Theoretical Physics in 1957, with Bohr as its chairman. He was also involved with the founding of the Research Establishment Risø of the Danish Atomic Energy Commission, and served as its first chairman from February 1956.
Bohr died of heart failure at his home in Carlsberg on 18 November 1962. He was cremated, and his ashes were buried in the family plot in the Assistens Cemetery in the Nørrebro section of Copenhagen, along with those of his parents, his brother Harald, and his son Christian. Years later, his wife's ashes were also interred there. On 7 October 1965, on what would have been his 80th birthday, the Institute for Theoretical Physics at the University of Copenhagen was officially renamed to what it had been called unofficially for many years: the Niels Bohr Institute.
Accolades
Bohr received numerous honours and accolades. In addition to the Nobel Prize, he received the Hughes Medal in 1921, the Matteucci Medal in 1923, the Franklin Medal in 1926, the Copley Medal in 1938, the Order of the Elephant in 1947, the Atoms for Peace Award in 1957 and the Sonning Prize in 1961. He became a foreign member of the Finnish Society of Sciences and Letters in 1922, and of the Royal Netherlands Academy of Arts and Sciences in 1923, an international member of the United States National Academy of Sciences in 1925, a member of the Royal Society in 1926, an international member of the American Philosophical Society in 1940, and an international honorary member of the American Academy of Arts and Sciences in 1945. The Bohr model's semicentennial was commemorated in Denmark on 21 November 1963 with a postage stamp depicting Bohr, the hydrogen atom and the formula for the difference of any two hydrogen energy levels: hν = E₂ − E₁. Several other countries have also issued postage stamps depicting Bohr. In 1997, the Danish National Bank began circulating the 500-krone banknote with the portrait of Bohr smoking a pipe. On 7 October 2012, in celebration of Niels Bohr's 127th birthday, a Google Doodle depicting the Bohr model of the hydrogen atom appeared on Google's home page. An asteroid, 3948 Bohr, was named after him, as was the Bohr lunar crater, and bohrium, the chemical element with atomic number 107, in acknowledgement of his work on the structure of atoms.
Bibliography
See also
Einstein–Podolsky–Rosen paradox
Notes
References
Further reading
Bohr's researches on reaction times.
External links
Niels Bohr Archive
Author profile in the database zbMATH
including the Nobel Lecture, 11 December 1922 The Structure of the Atom
Oral history interview transcript for Niels Bohr on 31 October 1962, American Institute of Physics, Niels Bohr Library & Archives – interviews conducted by Thomas S. Kuhn, Leon Rosenfeld, Erik Rudinger, and Aage Petersen
Oral history interview transcript for Niels Bohr on 1 November 1962, American Institute of Physics, Niels Bohr Library & Archives
Oral history interview transcript for Niels Bohr on 7 November 1962, American Institute of Physics, Niels Bohr Library & Archives
Oral history interview transcript for Niels Bohr on 14 November 1962, American Institute of Physics, Niels Bohr Library & Archives
Oral history interview transcript for Niels Bohr on 17 November 1962, American Institute of Physics, Niels Bohr Library & Archives
1885 births
1962 deaths
20th-century Danish philosophers
Nobel laureates in Physics
Danish Nobel laureates
Jewish physicists
Academics of the Victoria University of Manchester
Akademisk Boldklub players
Alumni of Trinity College, Cambridge
Men's association football goalkeepers
Atoms for Peace Award recipients
Niels
Corresponding Members of the Russian Academy of Sciences (1917–1925)
Corresponding Members of the USSR Academy of Sciences
Danish atheists
Jewish atheists
Danish Jews
Danish expatriates in England
Danish expatriates in the United States
Danish men's footballers
Jewish footballers
Danish nuclear physicists
Danish people of Jewish descent
Danish people of World War II
Jewish Nobel laureates
Jewish philosophers
20th-century Danish physicists
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Foreign fellows of the Indian National Science Academy
Grand Crosses of the Order of the Dannebrog
Honorary members of the USSR Academy of Sciences
Institute for Advanced Study visiting scholars
Manhattan Project people
Members of the Pontifical Academy of Sciences
Members of the Prussian Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the German National Academy of Sciences Leopoldina
Members of the German Academy of Sciences at Berlin
Niels Bohr International Gold Medal recipients
People from Gribskov Municipality
People associated with CERN
People associated with the nuclear weapons programme of the United Kingdom
Philosophers of science
Quantum physicists
Recipients of the Copley Medal
Recipients of the Pour le Mérite (civil class)
Scientists from Copenhagen
Theoretical physicists
University of Copenhagen alumni
Winners of the Max Planck Medal
Burials at Assistens Cemetery (Copenhagen)
Recipients of the Matteucci Medal
Manchester Literary and Philosophical Society
Members of the Göttingen Academy of Sciences and Humanities
Members of the American Philosophical Society
Recipients of Franklin Medal
| Niels Bohr | ["Physics"] | 9,360 | ["Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics"] |
21,285 | https://en.wikipedia.org/wiki/Nuclear%20physics | Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions, in addition to the study of other forms of nuclear matter.
Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons.
Discoveries in nuclear physics have led to applications in many fields. This includes nuclear power, nuclear weapons, nuclear medicine and magnetic resonance imaging, industrial and agricultural isotopes, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology. Such applications are studied in the field of nuclear engineering.
Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars and the origin of the chemical elements.
History
The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896, made while investigating phosphorescence in uranium salts. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. At the beginning of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model in which the atom was a positively charged ball with smaller negatively charged electrons embedded inside it.
In the years that followed, radioactivity was extensively investigated, notably by Marie Curie, a Polish physicist whose maiden name was Sklodowska, Pierre Curie, Ernest Rutherford and others. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays.
The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances".
In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons.
Rutherford discovers the nucleus
In 1906, Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter." Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and further greatly expanded work was published in 1910 by Geiger. In 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.
The key experiment was performed during 1909 at the University of Manchester; the results were published that year, and Rutherford's eventual classical analysis followed in May 1911. Ernest Rutherford's assistant, Professor Johannes "Hans" Geiger, and an undergraduate, Ernest Marsden, working under Rutherford's supervision, fired alpha particles (helium-4 nuclei) at a thin film of gold foil. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. Rutherford nonetheless instructed his team to look for large-angle scattering, and was shocked to observe that a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, with Rutherford's analysis of the data in 1911, led to the Rutherford model of the atom, in which the atom had a very small, very dense nucleus containing most of its mass, and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles) and the nucleus was surrounded by 7 more orbiting electrons.
Eddington and stellar nuclear fusion
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc2. This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
Studies of nuclear spin
The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons each had a spin of 1⁄2. In the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1⁄2. Rasetti discovered, however, that nitrogen-14 had a spin of 1.
James Chadwick discovers the neutron
In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert Becker, Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, that he called the neutron (following a suggestion from Rutherford about the need for such a particle). In the same year Dmitri Ivanenko suggested that there were no electrons in the nucleus, only protons and neutrons, and that neutrons were spin-1⁄2 particles, which explained the mass not due to protons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model each contributed a spin of 1⁄2 in the same direction, giving a final total spin of 1.
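To make the spin-counting argument concrete, here is a minimal Python sketch (illustrative only, not part of the original article) that lists the total spins allowed for a collection of spin-1⁄2 particles; it shows that the 21-particle Rutherford model of nitrogen-14 cannot yield the observed spin of 1, while the 14-particle proton–neutron model can.

```python
from fractions import Fraction

def allowed_total_spins(n):
    """Possible total spin quantum numbers for n spin-1/2 particles:
    n/2, n/2 - 1, ..., down to 0 (n even) or 1/2 (n odd)."""
    s = Fraction(n, 2)
    values = []
    while s >= 0:
        values.append(s)
        s -= 1
    return values

# Rutherford model of N-14: 14 protons + 7 nuclear electrons = 21 spin-1/2 particles
# Proton-neutron model:      7 protons + 7 neutrons          = 14 spin-1/2 particles
for label, n in [("Rutherford model (21 particles)", 21),
                 ("proton-neutron model (14 particles)", 14)]:
    print(f"{label}: total spin 1 allowed? {Fraction(1) in allowed_total_spins(n)}")
```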
With the discovery of the neutron, scientists could at last calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with that of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way. When nuclear reactions were measured, these were found to agree with Einstein's calculation of the equivalence of mass and energy to within 1% as of 1934.
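A minimal Python sketch of that mass-defect calculation; the numerical constants below are rounded reference figures assumed only for illustration.

```python
# Binding energy of helium-4 from its mass defect, E = Δm · c².
# Masses in unified atomic mass units (u); 1 u ≈ 931.494 MeV/c².
M_PROTON = 1.007276    # u (approximate)
M_NEUTRON = 1.008665   # u (approximate)
U_TO_MEV = 931.494     # MeV per u

def binding_energy_mev(protons, neutrons, nuclear_mass_u):
    """(mass of the free nucleons - mass of the bound nucleus), converted to MeV."""
    mass_defect = protons * M_PROTON + neutrons * M_NEUTRON - nuclear_mass_u
    return mass_defect * U_TO_MEV

be = binding_energy_mev(2, 2, 4.001506)   # helium-4 nuclear mass ≈ 4.001506 u
print(f"He-4: total binding energy ≈ {be:.1f} MeV, ≈ {be / 4:.1f} MeV per nucleon")
```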
Proca's equations of the massive vector boson field
Alexandru Proca was the first to develop and report the massive vector boson field equations and a theory of the mesonic field of nuclear forces. Proca's equations were known to Wolfgang Pauli who mentioned the equations in his Nobel address, and they were also known to Yukawa, Wentzel, Taketani, Sakata, Kemmer, Heitler, and Fröhlich who appreciated the content of Proca's equations for developing a theory of the atomic nuclei in Nuclear Physics.
Yukawa's meson postulated to bind nuclei
In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.
With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high-energy photons (gamma decay).
The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics, which describes the strong, weak, and electromagnetic forces.
Modern nuclear physics
A heavy nucleus can contain hundreds of nucleons. This means that with some approximation it can be treated as a classical system, rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy that arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
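A hedged Python sketch of the liquid-drop (semi-empirical) binding-energy formula described above; the coefficient values are one commonly quoted set and differ between published fits, so the output is only indicative.

```python
# Liquid-drop (semi-empirical) binding energy B(Z, A) in MeV.
A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0   # indicative coefficients

def liquid_drop_binding_energy(Z, A):
    N = A - Z
    volume = A_V * A                             # bulk ("volume") term
    surface = -A_S * A ** (2 / 3)                # surface-tension correction
    coulomb = -A_C * Z * (Z - 1) / A ** (1 / 3)  # electrical repulsion of protons
    asymmetry = -A_A * (N - Z) ** 2 / A          # neutron-proton imbalance penalty
    if Z % 2 == 0 and N % 2 == 0:
        pairing = A_P / A ** 0.5                 # even-even nuclei: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / A ** 0.5                # odd-odd nuclei: less binding
    else:
        pairing = 0.0
    return volume + surface + coulomb + asymmetry + pairing

for name, Z, A in [("Fe-56", 26, 56), ("U-238", 92, 238)]:
    B = liquid_drop_binding_energy(Z, A)
    print(f"{name}: B ≈ {B:.0f} MeV, B/A ≈ {B / A:.2f} MeV per nucleon")
```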
Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert Mayer and J. Hans D. Jensen. Nuclei with certain "magic" numbers of neutrons and protons are particularly stable, because their shells are filled.
Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons.
Ab initio methods try to solve the nuclear many-body problem from the ground up, starting from the nucleons and their interactions.
Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of Rugby balls or even pears) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark–gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
Nuclear decay
Eighty elements have at least one stable isotope which is never observed to decay, amounting to a total of about 251 stable nuclides. However, thousands of isotopes have been characterized as unstable. These "radioisotopes" decay over time scales ranging from fractions of a second to trillions of years. Plotted on a chart as a function of atomic and neutron numbers, the binding energy of the nuclides forms what is known as the valley of stability. Stable nuclides lie along the bottom of this energy valley, while increasingly unstable nuclides lie up the valley walls, that is, have weaker binding energy.
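As an illustration of these time scales, a short Python sketch of the exponential decay law N(t) = N₀ · (1⁄2)^(t/T½), using approximate half-lives assumed for the example.

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioisotope remaining after time t (same units as half_life)."""
    return 0.5 ** (t / half_life)

# Approximate half-lives, in years (illustrative values)
half_lives = {
    "carbon-14": 5.73e3,
    "uranium-238": 4.47e9,
}
for isotope, t_half in half_lives.items():
    frac = remaining_fraction(10_000, t_half)
    print(f"{isotope}: about {frac:.1%} remains after 10,000 years")
```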
The most stable nuclei fall within certain ranges or balances of composition of neutrons and protons: too few or too many neutrons (in relation to the number of protons) will cause it to decay. For example, in beta decay, a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is converted by the weak interaction into a proton, an electron and an antineutrino. The element is transmuted to another element, with a different number of protons.
In alpha decay, which typically occurs in the heaviest nuclei, the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays (usually beta decay) until a stable element is formed.
In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved).
Other more exotic decays are possible (see the first main article). For example, in internal conversion decay, the energy from an excited nucleus may eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons but is not beta decay and (unlike beta decay) does not transmute one element to another.
Nuclear fusion
In nuclear fusion, two low-mass nuclei come into very close contact with each other so that the strong force fuses them. It requires a large amount of energy for the strong or nuclear forces to overcome the electrical repulsion between the nuclei in order to fuse them; therefore nuclear fusion can only take place at very high temperatures or high pressures. When nuclei fuse, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up to nickel-62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER, is the development of an economically viable method of using energy from a controlled fusion reaction. Nuclear fusion is the origin of the energy (including in the form of light and other electromagnetic radiation) produced by the core of all stars including our own Sun.
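A small Python estimate of the energy released by the net solar reaction mentioned above, using rounded atomic masses assumed for illustration.

```python
# Energy released by the net reaction 4 ¹H -> ⁴He + 2 e⁺ + 2 ν, estimated from
# rounded *atomic* masses (which also account for the positrons annihilating
# with electrons). All values are approximate.
M_H1 = 1.007825     # atomic mass of hydrogen-1, u
M_HE4 = 4.002602    # atomic mass of helium-4, u
U_TO_MEV = 931.494  # MeV per u

mass_defect = 4 * M_H1 - M_HE4
energy_mev = mass_defect * U_TO_MEV
fraction = mass_defect / (4 * M_H1)
print(f"≈ {energy_mev:.1f} MeV per helium nucleus formed "
      f"({fraction:.2%} of the initial mass)")
# ≈ 26.7 MeV; a small share of this is carried away by the neutrinos.
```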
Nuclear fission
Nuclear fission is the reverse process to fusion. For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones.
The process of alpha decay is in essence a special type of spontaneous nuclear fission. It is a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.
From several of the heaviest nuclei whose fission produces free neutrons, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a chain reaction. Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions. The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission-type nuclear bombs, such as those detonated in Hiroshima and Nagasaki, Japan, at the end of World War II. Heavy nuclei such as uranium and thorium may also undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.
For a neutron-initiated chain reaction to occur, there must be a critical mass of the relevant isotope present in a certain space under certain conditions. The conditions for the smallest critical mass require the conservation of the emitted neutrons and also their slowing or moderation so that there is a greater cross-section or probability of them initiating another fission. In two regions of Oklo, Gabon, Africa, natural nuclear fission reactors were active over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain reactions.
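A toy Python sketch of neutron multiplication from generation to generation, assuming a constant multiplication factor k; this is a simplification of real criticality calculations, included only to illustrate sub-critical versus super-critical behaviour.

```python
def neutron_population(k, generations, n0=1.0):
    """Neutron count after each generation, given a constant multiplication factor k."""
    counts = [n0]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

# k < 1: sub-critical (chain dies out); k = 1: critical; k > 1: super-critical (grows)
for k in (0.9, 1.0, 1.1):
    final = neutron_population(k, 50)[-1]
    print(f"k = {k}: population after 50 generations ≈ {final:.3g}")
```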
Production of "heavy" elements
According to the theory, as the Universe cooled after the Big Bang it eventually became possible for common subatomic particles as we know them (neutrons, protons and electrons) to exist. The most common particles created in the Big Bang which are still easily observable to us today were protons and electrons (in equal numbers). The protons would eventually form hydrogen atoms. Almost all the neutrons created in the Big Bang were absorbed into helium-4 in the first three minutes after the Big Bang, and this helium accounts for most of the helium in the universe today (see Big Bang nucleosynthesis).
Some relatively small quantities of elements beyond helium (lithium, beryllium, and perhaps some boron) were created in the Big Bang, as the protons and neutrons collided with each other, but all of the "heavier elements" (carbon, element number 6, and elements of greater atomic number) that we see today, were created inside stars during a series of fusion stages, such as the proton–proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star.
Energy is only released in fusion processes involving smaller atoms than iron because the binding energy per nucleon peaks around iron (56 nucleons). Since the creation of heavier nuclei by fusion requires energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s-process) or the rapid, or r-process. The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r-process is thought to occur in supernova explosions, which provide the necessary conditions of high temperature, high neutron flux and ejected matter. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers).
See also
Isomeric shift
Neutron-degenerate matter
Nuclear chemistry
Nuclear matter
Nuclear model
Nuclear spectroscopy
Nuclear structure
Nucleonica, web driven nuclear science portal
QCD matter
References
Bibliography
Introductory
Semat, H. and Albright, John R. (1972). Introduction to Atomic and Nuclear Physics. Springer. ISBN 978-0-412-15670-0.
Littlefield, T.A. and Thorley, N. (1979) Atomic and Nuclear Physics: An Introduction. Springer US. ISBN 978-0-442-30190-3.
Reference works
Advanced
Cohen, Bernard L. (1971). Concepts of Nuclear Physics. McGraw-Hill, Inc.
Greiner, Walter; Maruhn, Joachim A. and Bromley, D.A. (1996). Nuclear Models. Springer. ISBN 9783540591801.
Classics or Historic
Fermi, E. (1950). Nuclear Physics. Univ. Chicago Press
External links
Ernest Rutherford's biography at the American Institute of Physics
American Physical Society Division of Nuclear Physics
American Nuclear Society
Annotated bibliography on nuclear physics from the Alsos Digital Library for Nuclear Issues
Nuclear science wiki
Nuclear Data Services – IAEA
Nuclear Physics, BBC Radio 4 discussion with Jim Al-Khalili, John Gribbin and Catherine Sutton (In Our Time, Jan. 10, 2002)
| Nuclear physics | ["Physics"] | 4,057 | ["Nuclear physics"] |
21,488 | https://en.wikipedia.org/wiki/Nanotechnology | Nanotechnology is the manipulation of matter with at least one dimension sized from 1 to 100 nanometers (nm). At this scale, commonly known as the nanoscale, surface area and quantum mechanical effects become important in describing properties of matter. This definition of nanotechnology includes all types of research and technologies that deal with these special properties. It is common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to research and applications whose common trait is scale. An earlier understanding of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabricating macroscale products, now referred to as molecular nanotechnology.
Nanotechnology defined by scale includes fields of science such as surface science, organic chemistry, molecular biology, semiconductor physics, energy storage, engineering, microfabrication, and molecular engineering. The associated research and applications range from extensions of conventional device physics to molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.
Nanotechnology may be able to create new materials and devices with diverse applications, such as in nanomedicine, nanoelectronics, biomaterials, energy production, and consumer products. However, nanotechnology raises issues, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.
Origins
The concepts that seeded nanotechnology were first discussed in 1959 by physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms.
The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself and of other items of arbitrary complexity with atom-level control. Also in 1986, Drexler co-founded The Foresight Institute to increase public awareness and understanding of nanotechnology concepts and implications.
The emergence of nanotechnology as a field in the 1980s occurred through the convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework, and high-visibility experimental advances that drew additional attention to the prospects. In the 1980s, two breakthroughs sparked the growth of nanotechnology. First, the invention of the scanning tunneling microscope in 1981 enabled visualization of individual atoms and bonds, and was successfully used to manipulate individual atoms in 1989. The microscope's developers Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.
Second, fullerenes (buckyballs) were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry. C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related carbon nanotubes (sometimes called graphene tubes or Bucky tubes) which suggested potential applications for nanoscale electronics and devices. The discovery of carbon nanotubes is largely attributed to Sumio Iijima of NEC in 1991, for which Iijima won the inaugural 2008 Kavli Prize in Nanoscience.
In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology. Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.
Meanwhile, commercial products based on advancements in nanoscale technologies began emerging. These products were limited to bulk applications of nanomaterials and did not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles.
Governments moved to promote and fund research into nanotechnology, such as through the American National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established research funding, and in Europe via the European Framework Programmes for Research and Technological Development.
By the mid-2000s scientific attention began to flourish. Nanotechnology roadmaps centered on atomically precise manipulation of matter and discussed existing and projected capabilities, goals, and applications.
Fundamental concepts
Nanotechnology is the science and engineering of functional systems at the molecular scale. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up making complete, high-performance products.
One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon–carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and DNA's diameter is around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm, following the definition used by the American National Nanotechnology Initiative. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which have an approximately 0.25 nm kinetic diameter). The upper limit is more or less arbitrary, but is around the size below which phenomena not observed in larger structures start to become apparent and can be made use of. These phenomena make nanotechnology distinct from devices that are merely miniaturized versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology.
To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth.
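A quick back-of-the-envelope check of that comparison in Python, using an assumed marble diameter of about 1 cm and an approximate Earth diameter.

```python
# Back-of-the-envelope check of the marble-to-Earth analogy (all sizes approximate).
nanometre = 1e-9          # m
metre = 1.0               # m
marble_diameter = 0.01    # m, an assumed ~1 cm marble
earth_diameter = 1.27e7   # m, approximate mean diameter of the Earth

print(f"nanometre / metre       = {nanometre / metre:.1e}")
print(f"marble / Earth diameter = {marble_diameter / earth_diameter:.1e}")
# Both ratios are of order 10^-9, which is the point of the comparison.
```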
Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.
Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved to provide nanotechnology's scientific foundation.
Larger to smaller: a materials perspective
Several phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example, the "quantum size effect" in which the electronic properties of solids alter along with reductions in particle size. Such effects do not apply at macro or micro dimensions. However, quantum effects can become significant when the nanometer size range is reached. Additionally, physical (mechanical, electrical, optical, etc.) properties change compared with macroscopic systems. One example is the increase in surface area to volume ratio, which alters the mechanical, thermal, and catalytic properties of materials. Diffusion and reactions can be different as well. Systems with fast ion transport are referred to as nanoionics. The mechanical properties of nanosystems are of interest in research.
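A short Python sketch of how the surface-area-to-volume ratio of a spherical particle grows as its diameter shrinks; the particle sizes are chosen only for illustration.

```python
import math

def surface_to_volume_ratio(diameter_m):
    """Surface-area-to-volume ratio of a sphere (1/m); equals 3 / radius."""
    radius = diameter_m / 2
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume

# Illustrative particle diameters
for label, d in [("10 µm particle", 10e-6),
                 ("100 nm particle", 100e-9),
                 ("10 nm particle", 10e-9)]:
    print(f"{label}: surface/volume ≈ {surface_to_volume_ratio(d):.1e} per metre")
# Shrinking the diameter by a factor of 1,000 raises the ratio by the same factor.
```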
Simple to complex: a molecular perspective
Modern synthetic chemistry can prepare small molecules of almost any structure. These methods are used to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.
These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry to automatically arrange themselves into a useful conformation through a bottom-up approach. The concept of molecular recognition is important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick basepairing rules are a direct result of this, as is the specificity of an enzyme targeting a single substrate, or the specific folding of a protein. Thus, components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.
Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, many examples of self-assembly based on molecular recognition exist in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions.
Molecular nanotechnology: a long-term view
Molecular nanotechnology, sometimes called molecular manufacturing, concerns engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with molecular assemblers, machines that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.
When Drexler independently coined and popularized the term "nanotechnology", he envisioned manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated molecular machines were possible: biology was full of examples of sophisticated, stochastically optimized biological machines.
Drexler and other researchers have proposed that advanced nanotechnology ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems: Molecular Machinery, Manufacturing, and Computation.
In general, assembling devices on the atomic scale requires positioning atoms on other atoms of comparable size and stickiness. Carlo Montemagno's view is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis was impossible due to difficulties in mechanically manipulating individual molecules.
This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machines are possible, non-biological molecular machines remained in their infancy. Alex Zettl and colleagues at Lawrence Berkeley Laboratories and UC Berkeley constructed at least three molecular devices whose motion is controlled via changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.
Ho and Lee at Cornell University in 1999 used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal and chemically bound the CO to the Fe by applying a voltage.
Research
Nanomaterials
Many areas of science develop or study materials having unique properties arising from their nanoscale dimensions.
Interface and colloid science produced many materials that may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related to nanoionics and nanoelectronics.
Nanoscale materials can be used for bulk applications; most commercial applications of nanotechnology are of this flavor.
Progress has been made in using these materials for medical applications, including tissue engineering, drug delivery, antibacterials and biosensors.
Nanoscale materials such as nanopillars are used in solar cells.
Applications incorporating semiconductor nanoparticles in products such as display technology, lighting, solar cells and biological imaging; see quantum dots.
Bottom-up approaches
The bottom-up approach seeks to arrange smaller components into more complex assemblies.
DNA nanotechnology utilizes Watson–Crick basepairing to construct well-defined structures out of DNA and other nucleic acids.
Approaches from the field of "classical" chemical synthesis (inorganic and organic synthesis) aim at designing molecules with well-defined shape (e.g. bis-peptides).
More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.
Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip-pen nanolithography. This technique fits into the larger subfield of nanolithography.
Molecular-beam epitaxy allows for bottom-up assemblies of materials, most notably semiconductor materials commonly used in chip and computing applications, stacks, gating, and nanowire lasers.
Top-down approaches
These seek to create smaller devices by using larger ones to direct their assembly.
Many technologies that descended from conventional solid-state silicon methods for fabricating microprocessors are capable of creating features smaller than 100 nm. Giant magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition (ALD) techniques. Peter Grünberg and Albert Fert received the Nobel Prize in Physics in 2007 for their discovery of giant magnetoresistance and contributions to the field of spintronics.
Solid-state techniques can be used to create nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS.
Focused ion beams can directly remove material, or even deposit material when suitable precursor gasses are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis in transmission electron microscopy.
Atomic force microscope tips can be used as a nanoscale "write head" to deposit a resist, which is then followed by an etching process to remove material in a top-down method.
Functional approaches
Functional approaches seek to develop useful components without regard to how they might be assembled.
Magnetic assembly for the synthesis of anisotropic superparamagnetic materials such as magnetic nano chains.
Molecular scale electronics seeks to develop molecules with useful electronic properties. These could be used as single-molecule components in a nanoelectronic device, such as rotaxane.
Synthetic chemical methods can be used to create synthetic molecular motors, such as in a so-called nanocar.
Biomimetic approaches
Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied.
Bionanotechnology is the use of biomolecules for applications in nanotechnology, including the use of viruses and lipid assemblies. Nanocellulose, a nanopolymer often used for bulk-scale applications, has gained interest owing to its useful properties such as abundance, high aspect ratio, good mechanical properties, renewability, and biocompatibility.
Speculative
These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry could progress. These often take a big-picture view, with more emphasis on societal implications than engineering details.
Molecular nanotechnology is a proposed approach that involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities.
Nanorobotics considers self-sufficient machines operating at the nanoscale. There are hopes for applying nanorobots in medicine. Nevertheless, progress on innovative materials and patented methodologies has been demonstrated.
Productive nanosystems are "systems of nanosystems" that could produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage could form the basis of another industrial revolution. Mihail Roco proposed four states of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems.
Programmable matter seeks to design materials whose properties can be easily, reversibly and externally controlled through a fusion of information science and materials science.
Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are used only informally.
Dimensionality in nanomaterials
Nanomaterials can be classified in 0D, 1D, 2D and 3D nanomaterials. Dimensionality plays a major role in determining the characteristic of nanomaterials including physical, chemical, and biological characteristics. With the decrease in dimensionality, an increase in surface-to-volume ratio is observed. This indicates that smaller dimensional nanomaterials have higher surface area compared to 3D nanomaterials. Two dimensional (2D) nanomaterials have been extensively investigated for electronic, biomedical, drug delivery and biosensor applications.
Tools and techniques
Scanning microscopes
The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two versions of scanning probes used for nanoscale observation. Scanning probe microscopes achieve much higher resolution than optical or acoustic microscopes, since they are not limited by the wavelength of light or sound.
The tip of a scanning probe can also be used to manipulate nanostructures (positional assembly). Feature-oriented scanning may be a promising way to implement these nano-scale manipulations via an automatic algorithm. However, this is still a slow process because of the low scanning velocity of the microscope.
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques.
Lithography
Various techniques of lithography, such as optical lithography, X-ray lithography, dip pen lithography, electron beam lithography or nanoimprint lithography offer top-down fabrication techniques where a bulk material is reduced to a nano-scale pattern.
Another group of nano-technological techniques include those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further including molecular self-assembly techniques such as those employing di-block copolymers.
Bottom-up
In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual-polarization interferometry is one tool suitable for characterization of self-assembled thin films. Another variation of the bottom-up approach is molecular-beam epitaxy or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect for which the 1998 Nobel Prize in Physics was awarded. MBE lays down atomically precise layers of atoms and, in the process, builds up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.
Therapeutic products based on responsive nanomaterials, such as the highly deformable, stress-sensitive Transfersome vesicles, are approved for human use in some countries.
Applications
As of August 21, 2008, the Project on Emerging Nanotechnologies estimated that over 800 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. Most applications are "first generation" passive nanomaterials, which include titanium dioxide in sunscreen, cosmetics, surface coatings, and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants, and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst.
In the electric car industry, single wall carbon nanotubes (SWCNTs) address key lithium-ion battery challenges, including energy density, charge rate, service life, and cost. SWCNTs connect electrode particles during the charge/discharge process, preventing premature battery degradation. Their exceptional ability to wrap active material particles enhances electrical conductivity and physical properties, setting them apart from multi-walled carbon nanotubes and carbon black.
Further applications allow tennis balls to last longer, golf balls to fly straighter, and bowling balls to become more durable. Trousers and socks have been infused with nanotechnology to last longer and keep the wearer cooler in the summer. Bandages are infused with silver nanoparticles to heal cuts faster. Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology. Nanotechnology is also being used to build structures for on-chip computing with light, for example on-chip optical quantum information processing, and picosecond transmission of information.
Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like doctors' offices and at home. Cars are being manufactured with nanomaterials so that their parts may require fewer metals during manufacturing and less fuel to operate in the future.
Nanoencapsulation involves the enclosure of active substances within carriers. Typically, these carriers offer advantages, such as enhanced bioavailability, controlled release, targeted delivery, and protection of the encapsulated substances. In the medical field, nanoencapsulation plays a significant role in drug delivery. It facilitates more efficient drug administration, reduces side effects, and increases treatment effectiveness. Nanoencapsulation is particularly useful for improving the bioavailability of poorly water-soluble drugs, enabling controlled and sustained drug release, and supporting the development of targeted therapies. These features collectively contribute to advancements in medical treatments and patient care.
Nanotechnology may play a role in tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage. For example, when creating scaffolds to support bone growth, researchers may mimic osteoclast resorption pits.
Researchers used DNA origami-based nanobots capable of carrying out logic functions to target drug delivery in cockroaches.
A nano bible (a 0.5 mm² silicon chip) was created by the Technion in order to increase youth interest in nanotechnology.
Implications
One concern is the effect that industrial-scale manufacturing and use of nanomaterials will have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated. However, regulation might stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health, research potential health effects stemming from exposures to nanoparticles.
Nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are released in the wash. These particles are then flushed into the wastewater stream and may destroy bacteria that are critical components of natural ecosystems, farms, and waste treatment processes.
Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability.
Experts, including director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies David Rejeski, testified that commercialization depends on adequate oversight, risk research strategy, and public engagement. As of 2006, Berkeley, California was the only US city to regulate nanotechnology.
Health and environmental concerns
Inhaling airborne nanoparticles and nanofibers may contribute to pulmonary diseases, e.g. fibrosis. Researchers found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, leading to significant increases in biomarkers for inflammation and stress response; other studies found that nanoparticles induce skin aging through oxidative stress in hairless mice.
A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".
A Nature Nanotechnology study suggested that some forms of carbon nanotubes could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes said "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully." In the absence of specific regulation forthcoming from governments, Paull and Lyons (2008) have called for an exclusion of engineered nanoparticles in food. A newspaper article reports that workers in a paint factory developed serious lung disease and nanoparticles were found in their lungs.
Regulation
Calls for tighter regulation of nanotechnology have accompanied a debate related to human health and safety risks. Some regulatory agencies cover some nanotechnology products and processes – by "bolting on" nanotechnology to existing regulations – leaving clear gaps. Davies proposed a road map describing steps to deal with these shortcomings.
Andrew Maynard, chief science advisor to the Woodrow Wilson Center's Project on Emerging Nanotechnologies, reported insufficient funding for human health and safety research, and as a result inadequate understanding of human health and safety risks. Some academics called for stricter application of the precautionary principle, slowing marketing approval, enhanced labelling and additional safety data.
A Royal Society report identified a risk of nanoparticles or nanotubes being released during disposal, destruction and recycling, and recommended that "manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure".
See also
Carbon nanotube
Electrostatic deflection (molecular physics/nanotechnology)
Energy applications of nanotechnology
Ethics of nanotechnologies
Ion implantation-induced nanoparticle formation
Gold nanoparticle
List of emerging technologies
List of nanotechnology organizations
List of software for nanostructures modeling
Magnetic nanochains
Materiomics
Nano-thermite
Molecular design software
Molecular mechanics
Nanobiotechnology
Nanoelectromechanical relay
Nanoengineering
Nanofluidics
NanoHUB
Nanometrology
Nanoneuronics
Nanoparticle
Nanoscale networks
Nanotechnology education
Nanotechnology in fiction
Nanotechnology in water treatment
Nanoweapons
National Nanotechnology Initiative
Self-assembly of nanoparticles
Top-down and bottom-up
Translational research
Wet nanotechnology
References
External links
What is Nanotechnology? (A Vega/BBC/OU Video Discussion).
1960 introductions
1985 introductions
1986 neologisms
Articles containing video clips | Nanotechnology | [
"Materials_science",
"Engineering"
] | 5,849 | [
"Nanotechnology",
"Materials science"
] |
21,505 | https://en.wikipedia.org/wiki/Nucleotide | Nucleotides are organic molecules composed of a nitrogenous base, a pentose sugar and a phosphate. They serve as monomeric units of the nucleic acid polymers – deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), both of which are essential biomolecules within all life-forms on Earth. Nucleotides are obtained in the diet and are also synthesized from common nutrients by the liver.
Nucleotides are composed of three subunit molecules: a nucleobase, a five-carbon sugar (ribose or deoxyribose), and a phosphate group consisting of one to three phosphates. The four nucleobases in DNA are guanine, adenine, cytosine, and thymine; in RNA, uracil is used in place of thymine.
Nucleotides also play a central role in metabolism at a fundamental, cellular level. They provide chemical energy—in the form of the nucleoside triphosphates, adenosine triphosphate (ATP), guanosine triphosphate (GTP), cytidine triphosphate (CTP), and uridine triphosphate (UTP)—throughout the cell for the many cellular functions that demand energy, including: amino acid, protein and cell membrane synthesis, moving the cell and cell parts (both internally and intercellularly), cell division, etc. In addition, nucleotides participate in cell signaling (cyclic guanosine monophosphate or cGMP and cyclic adenosine monophosphate or cAMP) and are incorporated into important cofactors of enzymatic reactions (e.g., coenzyme A, FAD, FMN, NAD, and NADP+).
In experimental biochemistry, nucleotides can be radiolabeled using radionuclides to yield radionucleotides.
5′-nucleotides are also used as flavour enhancers (food additives) to enhance the umami taste, often in the form of a yeast extract.
Structure
A nucleotide is composed of three distinctive chemical sub-units: a five-carbon sugar molecule, a nucleobase (the two of which together are called a nucleoside), and one phosphate group. With all three joined, a nucleotide is also termed a "nucleoside monophosphate", "nucleoside diphosphate" or "nucleoside triphosphate", depending on how many phosphates make up the phosphate group.
In nucleic acids, nucleotides contain either a purine or a pyrimidine base—i.e., the nucleobase molecule, also known as a nitrogenous base—and are termed ribonucleotides if the sugar is ribose, or deoxyribonucleotides if the sugar is deoxyribose. Individual phosphate molecules repetitively connect the sugar-ring molecules in two adjacent nucleotide monomers, thereby connecting the nucleotide monomers of a nucleic acid end-to-end into a long chain. These chain-joins of sugar and phosphate molecules create a 'backbone' strand for a single or double helix. In any one strand, the chemical orientation (directionality) of the chain-joins runs from the 5'-end to the 3'-end (read: 5 prime-end to 3 prime-end)—referring to the five carbon sites on sugar molecules in adjacent nucleotides. In a double helix, the two strands are oriented in opposite directions, which permits base pairing and complementarity between the base-pairs, all of which is essential for replicating or transcribing the encoded information found in DNA.
Nucleic acids then are polymeric macromolecules assembled from nucleotides, the monomer-units of nucleic acids. The purine bases adenine and guanine and pyrimidine base cytosine occur in both DNA and RNA, while the pyrimidine bases thymine (in DNA) and uracil (in RNA) occur in just one. Adenine forms a base pair with thymine with two hydrogen bonds, while guanine pairs with cytosine with three hydrogen bonds.
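As a small illustration of complementarity and antiparallel strand orientation, the sketch below (not part of the original article; the example sequence and names are invented) computes the reverse complement of a short DNA sequence, i.e. the 5'→3' reading of the opposite strand.

```python
# Minimal sketch: Watson-Crick base pairing plus the antiparallel orientation of the
# two strands means the partner strand is obtained by complementing and then reversing.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(sequence):
    return "".join(PAIRS[base] for base in reversed(sequence))

print(reverse_complement("ATGCGT"))  # prints "ACGCAT"
```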
In addition to being building blocks for the construction of nucleic acid polymers, singular nucleotides play roles in cellular energy storage and provision, cellular signaling, as a source of phosphate groups used to modulate the activity of proteins and other signaling molecules, and as enzymatic cofactors, often carrying out redox reactions. Signaling cyclic nucleotides are formed by binding the phosphate group twice to the same sugar molecule, bridging the 5'- and 3'- hydroxyl groups of the sugar. Some signaling nucleotides differ from the standard single-phosphate group configuration, in having multiple phosphate groups attached to different positions on the sugar. Nucleotide cofactors include a wider range of chemical groups attached to the sugar via the glycosidic bond, including nicotinamide and flavin, and in the latter case, the ribose sugar is linear rather than forming the ring seen in other nucleotides.
Synthesis
Nucleotides can be synthesized by a variety of means, both in vitro and in vivo.
In vitro, protecting groups may be used during laboratory production of nucleotides. A purified nucleoside is protected to create a phosphoramidite, which can then be used to obtain analogues not found in nature and/or to synthesize an oligonucleotide.
In vivo, nucleotides can be synthesized de novo or recycled through salvage pathways. The components used in de novo nucleotide synthesis are derived from biosynthetic precursors of carbohydrate and amino acid metabolism, and from ammonia and carbon dioxide. Recently it has been also demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling. The liver is the major organ of de novo synthesis of all four nucleotides. De novo synthesis of pyrimidines and purines follows two different pathways. Pyrimidines are synthesized first from aspartate and carbamoyl-phosphate in the cytoplasm to the common precursor ring structure orotic acid, onto which a phosphorylated ribosyl unit is covalently linked. Purines, however, are first synthesized from the sugar template onto which the ring synthesis occurs. For reference, the syntheses of the purine and pyrimidine nucleotides are carried out by several enzymes in the cytoplasm of the cell, not within a specific organelle. Nucleotides undergo breakdown such that useful parts can be reused in synthesis reactions to create new nucleotides.
Pyrimidine ribonucleotide synthesis
The synthesis of the pyrimidines CTP and UTP occurs in the cytoplasm and starts with the formation of carbamoyl phosphate from glutamine and CO2. Next, aspartate carbamoyltransferase catalyzes a condensation reaction between aspartate and carbamoyl phosphate to form carbamoyl aspartic acid, which is cyclized into 4,5-dihydroorotic acid by dihydroorotase. The latter is converted to orotate by dihydroorotate oxidase. The net reaction is:
(S)-Dihydroorotate + O2 → Orotate + H2O2
Orotate is covalently linked with a phosphorylated ribosyl unit. The covalent linkage between the ribose and pyrimidine occurs at position C1 of the ribose unit, which contains a pyrophosphate, and N1 of the pyrimidine ring. Orotate phosphoribosyltransferase (PRPP transferase) catalyzes the net reaction yielding orotidine monophosphate (OMP):
Orotate + 5-Phospho-α-D-ribose 1-diphosphate (PRPP) → Orotidine 5'-phosphate + Pyrophosphate
Orotidine 5'-monophosphate is decarboxylated by orotidine-5'-phosphate decarboxylase to form uridine monophosphate (UMP). PRPP transferase catalyzes both the ribosylation and decarboxylation reactions, forming UMP from orotic acid in the presence of PRPP. It is from UMP that other pyrimidine nucleotides are derived. UMP is phosphorylated by two kinases to uridine triphosphate (UTP) via two sequential reactions with ATP. First, the diphosphate from UDP is produced, which in turn is phosphorylated to UTP. Both steps are fueled by ATP hydrolysis:
ATP + UMP → ADP + UDP
UDP + ATP → UTP + ADP
CTP is subsequently formed by the amination of UTP by the catalytic activity of CTP synthetase. Glutamine is the NH3 donor and the reaction is fueled by ATP hydrolysis, too:
UTP + Glutamine + ATP + H2O → CTP + ADP + Pi
Cytidine monophosphate (CMP) is derived from cytidine triphosphate (CTP) with subsequent loss of two phosphates.
Purine ribonucleotide synthesis
The atoms that are used to build the purine nucleotides come from a variety of sources:
The de novo synthesis of purine nucleotides by which these precursors are incorporated into the purine ring proceeds by a 10-step pathway to the branch-point intermediate IMP, the nucleotide of the base hypoxanthine. AMP and GMP are subsequently synthesized from this intermediate via separate, two-step pathways. Thus, purine moieties are initially formed as part of the ribonucleotides rather than as free bases.
Six enzymes take part in IMP synthesis. Three of them are multifunctional:
GART (reactions 2, 3, and 5)
PAICS (reactions 6, and 7)
ATIC (reactions 9, and 10)
The pathway starts with the formation of PRPP. PRPS1 is the enzyme that activates R5P, which is formed primarily by the pentose phosphate pathway, to PRPP by reacting it with ATP. The reaction is unusual in that a pyrophosphoryl group is directly transferred from ATP to C1 of R5P and that the product has the α configuration about C1. This reaction is also shared with the pathways for the synthesis of Trp, His, and the pyrimidine nucleotides. Being on a major metabolic crossroad and requiring much energy, this reaction is highly regulated.
In the first reaction unique to purine nucleotide biosynthesis, PPAT catalyzes the displacement of PRPP's pyrophosphate group (PPi) by an amide nitrogen donated from either glutamine (N), glycine (N&C), aspartate (N), folic acid (C1), or CO2. This is the committed step in purine synthesis. The reaction occurs with the inversion of configuration about ribose C1, thereby forming β-5-phosphorybosylamine (5-PRA) and establishing the anomeric form of the future nucleotide.
Next, a glycine is incorporated, fueled by ATP hydrolysis, and the carboxyl group forms an amine bond to the NH2 previously introduced. A one-carbon unit from folic acid coenzyme N10-formyl-THF is then added to the amino group of the substituted glycine, followed by the closure of the imidazole ring. Next, a second NH2 group is transferred from glutamine to the first carbon of the glycine unit. The second carbon of the glycine unit is concomitantly carboxylated. This new carbon is modified by the addition of a third NH2 unit, this time transferred from an aspartate residue. Finally, a second one-carbon unit from formyl-THF is added to the nitrogen group and the ring is covalently closed to form the common purine precursor inosine monophosphate (IMP).
Inosine monophosphate is converted to adenosine monophosphate in two steps. First, GTP hydrolysis fuels the addition of aspartate to IMP by adenylosuccinate synthase, substituting the carbonyl oxygen for a nitrogen and forming the intermediate adenylosuccinate. Fumarate is then cleaved off forming adenosine monophosphate. This step is catalyzed by adenylosuccinate lyase.
Inosine monophosphate is converted to guanosine monophosphate by the oxidation of IMP forming xanthylate, followed by the insertion of an amino group at C2. NAD+ is the electron acceptor in the oxidation reaction. The amide group transfer from glutamine is fueled by ATP hydrolysis.
Pyrimidine and purine degradation
In humans, pyrimidine rings (C, T, U) can be degraded completely to CO2 and NH3 (urea excretion). Purine rings (G, A), however, cannot. Instead, they are degraded to the metabolically inert uric acid, which is then excreted from the body. Uric acid is formed when GMP is split into the base guanine and ribose. Guanine is deaminated to xanthine, which in turn is oxidized to uric acid. This last reaction is irreversible. Similarly, uric acid can be formed when AMP is deaminated to IMP, from which the ribose unit is removed to form hypoxanthine. Hypoxanthine is oxidized to xanthine and finally to uric acid. Instead of uric acid secretion, guanine and IMP can be used for recycling purposes and nucleic acid synthesis in the presence of PRPP and aspartate (NH3 donor).
Prebiotic synthesis of nucleotides
Theories about the origin of life require knowledge of chemical pathways that permit formation of life's key building blocks under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating ribonucleotides, the fundamental molecules that combine in series to form RNA. Complex molecules like RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for reliable information transfer, and thus Darwinian evolution. Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose, driven solely by wet-dry cycles. Purine nucleosides can be synthesized by a similar pathway. 5'-mono- and di-phosphates also form selectively from phosphate-containing minerals, allowing concurrent formation of polyribonucleotides with both the purine and pyrimidine bases. Thus a reaction network towards the purine and pyrimidine RNA building blocks can be established starting from simple atmospheric or volcanic molecules.
Unnatural base pair (UBP)
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. Examples include d5SICS and dNaM. These artificial nucleotides bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. E. coli have been induced to replicate a plasmid containing UBPs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations.
Medical applications of synthetic nucleotides
The applications of synthetic nucleotides vary widely and include disease diagnosis, treatment, or precision medicine.
Antiviral or Antiretroviral agents: several nucleotide derivatives have been used in the treatment against infection with Hepatitis and HIV. Examples of direct nucleoside analog reverse-transcriptase inhibitors (NRTIs) include Tenofovir disoproxil, Tenofovir alafenamide, and Sofosbuvir. On the other hand, agents such as Mericitabine, Lamivudine, Entecavir and Telbivudine must first undergo metabolization via phosphorylation to become activated.
Antisense oligonucleotides (ASO): synthetic oligonucleotides have been used in the treatment of rare heritable diseases since they can bind specific RNA transcripts and ultimately modulate protein expression. Spinal muscular atrophy, amyotrophic lateral sclerosis, homozygous familial hypercholesterolemia, and primary hyperoxaluria type 1 are all amenable to ASO-based therapy. The application of oligonucleotides is a new frontier in precision medicine and management of conditions which are untreatable.
Synthetic guide RNA (gRNA): synthetic nucleotides can be used to design gRNA which are essential for the proper function of gene-editing technologies such as CRISPR-Cas9.
Length unit
Nucleotide (abbreviated "nt") is a common unit of length for single-stranded nucleic acids, similar to how base pair is a unit of length for double-stranded nucleic acids.
Abbreviation codes for degenerate bases
The IUPAC has designated the symbols for nucleotides. Apart from the five (A, G, C, T/U) bases, degenerate bases are often used, especially for designing PCR primers. These nucleotide codes are listed here. Some primer sequences may also include the character "I", which codes for the non-standard nucleotide inosine. Inosine occurs in tRNAs and will pair with adenine, cytosine, or thymine. This character does not appear in the following table, however, because it does not represent a degeneracy. While inosine can serve a similar function as the degeneracy "H", it is an actual nucleotide, rather than a representation of a mix of nucleotides that covers each possible pairing needed.

Symbol   Bases represented
R        A or G (purine)
Y        C or T (pyrimidine)
S        G or C (strong)
W        A or T (weak)
K        G or T (keto)
M        A or C (amino)
B        C, G or T (not A)
D        A, G or T (not C)
H        A, C or T (not G)
V        A, C or G (not T)
N        any base
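As a rough sketch of how these degeneracy codes are used in practice, for example when expanding a degenerate PCR primer into the concrete sequences it stands for, the snippet below is illustrative only; the primer and function names are invented for the example.

```python
# Expand a degenerate primer written with IUPAC codes into all matching sequences.
from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_primer(primer):
    """Return every non-degenerate sequence matching a degenerate primer."""
    return ["".join(bases) for bases in product(*(IUPAC[symbol] for symbol in primer))]

print(expand_primer("ATRC"))   # ['ATAC', 'ATGC'], since R stands for A or G
```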
See also
Biology
Chromosome
Gene
Genetics
References
Further reading
DNA
Molecular biology | Nucleotide | [
"Chemistry",
"Biology"
] | 3,948 | [
"Biochemistry",
"Molecular biology"
] |
21,506 | https://en.wikipedia.org/wiki/Numerical%20analysis | Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.
The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
Numerical analysis continues this long tradition: rather than giving exact symbolic answers translated into digits and applicable only to real-world measurements, approximate solutions within specified error bounds are used.
Applications
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
In the financial field, private investment funds and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
Insurance companies use numerical programs for actuarial analysis.
History
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine,
but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications.
Key concepts
Direct and iterative methods
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
As an example, consider the problem of solving
3x³ + 4 = 28
for the unknown quantity x.
For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57. Repeatedly halving the interval that contains a sign change gives:

a          b          mid        f(mid)
0          3          1.5        −13.875
1.5        3          2.25       10.17...
1.5        2.25       1.875      −4.22...
1.875      2.25       2.0625     2.32...

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
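A minimal sketch of this bisection iteration in Python is shown below; the function, bracket, and iteration count mirror the example above, and the helper name is illustrative.

```python
# Bisection: keep halving the sub-interval on which f changes sign.
def bisect(f, a, b, iterations=10):
    fa = f(a)
    for _ in range(iterations):
        mid = (a + b) / 2.0
        fm = f(mid)
        if fa * fm <= 0:       # sign change lies in [a, mid]
            b = mid
        else:                  # sign change lies in [mid, b]
            a, fa = mid, fm
    return a, b

interval = bisect(lambda x: 3 * x**3 - 24, 0.0, 3.0, iterations=4)
print(interval)                # after four steps the bracket is (1.875, 2.0625)
```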
Conditioning
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, evaluating the same function near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
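The contrast between the two cases can be checked numerically; the short sketch below assumes the reconstructed function f(x) = 1/(x − 1) and simply evaluates it at the points mentioned above.

```python
def f(x):
    return 1.0 / (x - 1.0)

# Ill-conditioned region: a tiny change in x produces a huge change in f(x).
print(f(1.1), f(1.001))    # roughly 10.0 and 1000.0
# Well-conditioned region: a unit change in x barely changes f(x).
print(f(10.0), f(11.0))    # roughly 0.111 and 0.1
```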
Discretization
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points at its domain, even though this domain is a continuum.
Generation and propagation of errors
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.
Round-off
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
Truncation and discretization error
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above to compute the solution of 3x³ + 4 = 28 by bisection, after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01.
Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type a + b + c + d + e is even more inexact.
A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen.
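A small illustration of the interplay between truncation and round-off error is the forward-difference approximation of a derivative: shrinking the step size first reduces the truncation error, then lets round-off in the subtraction dominate. The sketch below is illustrative only; the function and step sizes are arbitrary choices.

```python
# Forward-difference approximation of d/dx sin(x) at x = 1.0 for several step sizes.
import math

exact = math.cos(1.0)
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    approx = (math.sin(1.0 + h) - math.sin(1.0)) / h
    print(f"h = {h:.0e}  error = {abs(approx - exact):.2e}")
# The error first shrinks with h (truncation), then grows again (round-off).
```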
Numerical stability and well-posed problems
An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
Areas of study
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:
Computing values of functions
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
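For concreteness, a minimal Horner-scheme evaluator might look like the sketch below (illustrative only); it evaluates a degree-n polynomial with n multiplications and n additions.

```python
# Horner scheme: evaluate a_n*x^n + ... + a_1*x + a_0 by nested multiplication.
def horner(coefficients, x):
    # coefficients are ordered from the highest-degree term down to the constant term
    result = 0.0
    for c in coefficients:
        result = result * x + c
    return result

# Example: p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3 gives 5.
print(horner([2, -6, 2, -1], 3.0))
```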
Interpolation, extrapolation, and regression
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be found. The least squares-method is one way to achieve this.
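A brief sketch of such a least-squares fit is given below, assuming NumPy is available; the data points are invented for illustration, and the line y ≈ m·x + b is obtained by minimizing the sum of squared residuals.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])          # imprecise measurements

A = np.vstack([x, np.ones_like(x)]).T             # design matrix [x, 1]
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)    # minimizes ||A @ [m, b] - y||
print(m, b)                                       # slope and intercept of the fit
```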
Solving equations and systems of equations
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.
Much effort has been put in the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or hermitian) and positive-definite matrix, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
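As an illustration of an iterative solver for linear systems, the sketch below implements the Jacobi method for a small, diagonally dominant system; the matrix and iteration count are arbitrary example choices.

```python
import numpy as np

def jacobi(A, b, iterations=50):
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                       # diagonal entries of A
    R = A - np.diagflat(D)               # off-diagonal part of A
    for _ in range(iterations):
        x = (b - R @ x) / D              # x_{k+1} = D^{-1} (b - R x_k)
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b))                      # approaches the solution of A x = b
```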
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
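A short sketch of Newton's method is shown below, assuming the derivative is available in closed form; the example equation x³ − 8 = 0 and the starting point are chosen purely for illustration.

```python
# Newton's method: linearize f at the current iterate and step to the tangent's root.
def newton(f, df, x, iterations=6):
    for _ in range(iterations):
        x = x - f(x) / df(x)
    return x

root = newton(lambda x: x**3 - 8.0, lambda x: 3.0 * x**2, 3.0)
print(root)                              # converges rapidly toward 2.0
```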
Solving eigenvalue or singular value problems
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
Optimization
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
Evaluating integrals
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
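As a concrete example of a composite Newton–Cotes rule, the sketch below applies the midpoint rule to the integral of sin(x) over [0, π], whose exact value is 2; the number of subintervals is an arbitrary choice.

```python
import math

def midpoint_rule(f, a, b, n=100):
    h = (b - a) / n
    # Sum f at the midpoint of each of the n subintervals, scaled by the width h.
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(midpoint_rule(math.sin, 0.0, math.pi))   # close to 2.0
```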
Differential equations
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
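To make the discretization step concrete, the sketch below solves the one-dimensional boundary value problem −u''(x) = 1 on [0, 1] with u(0) = u(1) = 0 by a finite difference method, replacing the differential equation with a tridiagonal algebraic system; the problem and grid size are illustrative choices, not from the article.

```python
import numpy as np

n = 50                                     # number of interior grid points
h = 1.0 / (n + 1)
# Second-derivative stencil (-1, 2, -1) / h^2 assembled as a tridiagonal matrix
main = np.full(n, 2.0)
off = -np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
f = np.ones(n)                             # right-hand side of -u'' = 1
u = np.linalg.solve(A, f)                  # the algebraic system replacing the ODE
x = np.linspace(h, 1.0 - h, n)             # interior grid points
print(np.max(np.abs(u - x * (1 - x) / 2))) # error against the exact solution x(1-x)/2
```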
Software
Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.
Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics (code for these "AS" functions is here);
ACM similarly, in its Transactions on Mathematical Software ("TOMS" code is here).
The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines (code here).
There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
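The remark about scalar loops versus vector operations can be illustrated with a small timing experiment; the sketch below assumes NumPy and compares a Python-level loop with the equivalent vectorized dot product (absolute timings will vary by machine).

```python
import time
import numpy as np

v = np.random.rand(1_000_000)

start = time.perf_counter()
total_loop = 0.0
for value in v:                  # scalar loop, interpreted element by element
    total_loop += value * value
loop_time = time.perf_counter() - start

start = time.perf_counter()
total_vec = float(np.dot(v, v))  # vectorized, runs in compiled code
vec_time = time.perf_counter() - start

print(loop_time, vec_time)       # the vectorized version is usually far faster
```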
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built in "solver".
See also
:Category:Numerical analysts
Analysis of algorithms
Approximation theory
Computational science
Computational physics
Gordon Bell Prize
Interval arithmetic
List of numerical analysis topics
Local linearization method
Numerical differentiation
Numerical Recipes
Probabilistic numerics
Symbolic-numeric computation
Validated numerics
Notes
References
Citations
Sources
David Kincaid and Ward Cheney: Numerical Analysis : Mathematics of Scientific Computing, 3rd Ed., AMS, ISBN 978-0-8218-4788-6 (2002).
(examples of the importance of accurate arithmetic).
External links
Journals
Numerische Mathematik, volumes 1–..., Springer, 1959–
volumes 1–66, 1959–1994 (searchable; pages are images).
SIAM Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964–
Online texts
Numerical Recipes, William H. Press (free, downloadable previous editions)
First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner
CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01)
Numerical Methods, ch 3. in the Digital Library of Mathematical Functions
Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun)
Online course material
Numerical Methods (), Stuart Dalziel University of Cambridge
Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf University of Pennsylvania
Numerical methods, John D. Fenton University of Karlsruhe
Numerical Methods for Physicists, Anthony O’Hare Oxford University
Lectures in Numerical Analysis (archived), R. Radok Mahidol University
Introduction to Numerical Analysis for Engineering, Henrik Schmidt Massachusetts Institute of Technology
Numerical Analysis for Engineering, D. W. Harder University of Waterloo
Introduction to Numerical Analysis, Doron Levy University of Maryland
Numerical Analysis - Numerical Methods (archived), John H. Mathews California State University Fullerton
Mathematical physics
Computational science | Numerical analysis | [
"Physics",
"Mathematics"
] | 3,552 | [
"Applied mathematics",
"Theoretical physics",
"Computational mathematics",
"Computational science",
"Mathematical relations",
"Numerical analysis",
"Mathematical physics",
"Approximations"
] |
21,514 | https://en.wikipedia.org/wiki/Nanomedicine | Nanomedicine is the medical application of nanotechnology. Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).
Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.
Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future. The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging. Nanomedicine research is receiving funding from the US National Institutes of Health Common Fund program, supporting four nanomedicine development centers. The goal of funding this newer form of science is to further develop the biological, biochemical, and biophysical mechanisms of living tissues. More medical and drug companies today are becoming involved in nanomedical research and medications. These include Bristol-Myers Squibb, which focuses on drug delivery systems for immunology and fibrotic diseases; Moderna, known for its COVID-19 vaccine and its work on mRNA therapeutics; and Nanobiotix, a company that focuses on cancer and currently has a drug in testing that increases the effect of radiation on targeted cells. More companies include Generation Bio, which specializes in genetic medicines and has developed the cell-targeted lipid nanoparticle, and Jazz Pharmaceuticals, which developed Vyxeos, a drug that treats acute myeloid leukemia, and concentrates on cancer and neuroscience. Cytiva is a company that specializes in producing delivery systems for genomic medicines that are non-viral, including mRNA vaccines and other therapies utilizing nucleic acids, and Ratiopharm is known for manufacturing Pazenir, a drug for various cancers. Finally, Pacira specializes in pain management and is known for producing ZILRETTA for osteoarthritis knee pain, the first treatment without opioids.
Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013. In 2023, the global market was valued at $189.55 billion and is predicted to exceed $500 billion in the next ten years. As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.
Drug delivery
Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles. This use of drug delivery systems was first proposed by Gregory Gregoriadis in 1974, who outlined liposomes as a drug delivery system for chemotherapy. The overall drug consumption and side-effects may be lowered significantly by depositing the active pharmaceutical agent in the diseased region only and in no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs, with corresponding decreases in consumption and treatment expenses. Additionally, targeted drug delivery reduces the side effects of crude or naturally occurring drugs by minimizing undesired exposure to healthy cells. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices. A benefit of using nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, plus biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery. The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drugs, b) successful delivery of drug to the targeted region of the body, and c) successful release of the drug. Several nano-delivery drugs were on the market by 2019.
Drug delivery systems, lipid- or polymer-based nanoparticles, can be designed to improve the pharmacokinetics and biodistribution of the drug. However, the pharmacokinetics and pharmacodynamics of nanomedicine is highly variable among different patients. When designed to avoid the body's defense mechanisms, nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently. Drugs are placed in the body and only activate on encountering a particular signal. For example, a drug with poor solubility will be replaced by a drug delivery system where both hydrophilic and hydrophobic environments exist, improving the solubility. Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect due to the complex host's reactions to nano- and microsized materials and the difficulty in targeting specific organs in the body. Nevertheless, a lot of work is still ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. While advancement of research proves that targeting and distribution can be augmented by nanoparticles, the dangers of nanotoxicity become an important next step in further understanding of their medical uses. The toxicity of nanoparticles varies, depending on size, shape, and material. These factors also affect the build-up and organ damage that may occur. Nanoparticles are made to be long-lasting, but this causes them to be trapped within organs, specifically the liver and spleen, as they cannot be broken down or excreted. This build-up of non-biodegradable material has been observed to cause organ damage and inflammation in mice. Delivering magnetic nanoparticles to a tumor using uneven stationary magnetic fields may lead to enhanced tumor growth. In order to avoid this, alternating electromagnetic fields should be used.
Nanoparticles are under research for their potential to decrease antibiotic resistance or for various antimicrobial uses. Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.
Systems under research
Advances in lipid nanotechnology were instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications. Another system for microRNA delivery under preliminary research is nanoparticles formed by the self-assembly of two different microRNAs to possibly shrink tumors. One potential application is based on small electromechanical systems, such as nanoelectromechanical systems being investigated for the active release of drugs and sensors for possible cancer treatment with iron nanoparticles or gold shells. Another system of drug delivery involving nanoparticles is the use of aquasomes, self-assembled nanoparticles with a nanocrystalline center, a coating made of a polyhydroxyl oligomer, covered in the desired drug, which protects it from dehydration and conformational change.
Applications
Some nanotechnology-based drugs that are commercially available or in human clinical trials include:
Doxil was originally approved by the FDA for the use on HIV-related Kaposi's sarcoma. It is now being used to also treat ovarian cancer and multiple myeloma. The drug is encased in liposomes, which helps to extend the life of the drug that is being distributed. Liposomes are self-assembling, spherical, closed colloidal structures that are composed of lipid bilayers that surround an aqueous space. The liposomes also help to increase the functionality and it helps to decrease the damage that the drug does to the heart muscles specifically.
Onivyde, liposome encapsulated irinotecan to treat metastatic pancreatic cancer, was approved by FDA in October 2015.
Rapamune is a nanocrystal-based drug that was approved by the FDA in 2000 to prevent organ rejection after transplantation. The nanocrystal components allow for increased drug solubility and dissolution rate, leading to improved absorption and high bioavailability.
Cabenuva is approved by FDA as cabotegravir extended-release injectable nano-suspension, plus rilpivirine extended-release injectable nano-suspension. It is indicated as a complete regimen for the treatment of HIV-1 infection in adults to replace the current antiretroviral regimen in those who are virologically suppressed (HIV-1 RNA less than 50 copies per mL) on a stable antiretroviral regimen with no history of treatment failure and with no known or suspected resistance to either cabotegravir or rilpivirine. This is the first FDA-approved injectable, complete regimen for HIV-1 infected adults that is administered once a month.
Imaging
In vivo imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI have a better distribution and improved contrast. In cardiovascular imaging, nanoparticles have potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present.
The small size of nanoparticles gives them properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor, and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for activation. This means that the use of fluorescent quantum dots could produce a higher contrast image at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, but this concern may be addressed by use of fluorescent dopants, substances added to create fluorescence.
Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as cells. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes. The dots can be random in size, can be made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make a group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce. Then both groups can be lit with a single light source. They have also found a way to insert nanoparticles into the affected parts of the body so that those parts of the body will glow showing the tumor growth or shrinkage or also organ trouble.
Sensing
Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. Silica nanoparticles, in particular, are inert from a photophysical perspective and can accumulate a large number of dye(s) within their shells. Gold nanoparticles tagged with short DNA segments can be used to detect genetic sequences in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.
Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in the early stages from a few drops of a patient's blood. Nanotechnology is helping to advance the use of arthroscopes, which are pencil-sized devices that are used in surgeries with lights and cameras so surgeons can do the surgeries with smaller incisions. The smaller the incisions the faster the healing time which is better for the patients. It is also helping to find a way to make an arthroscope smaller than a strand of hair.
Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. They could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity that is a thousand times better than a conventional laboratory test. These devices are built with nanowires to detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device. Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. It is now able to be tailored to each individual's tumor for better performance. They have found ways that they will be able to target a specific part of the body that is being affected by cancer.
Sepsis treatment
In contrast to dialysis, which works on the principle of the size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, the purification using nanoparticles allows specific targeting of substances. Additionally, larger compounds which are commonly not dialyzable can be removed.
The purification process is based on functionalized iron oxide or carbon coated metal nanoparticles with ferromagnetic or superparamagnetic properties. Binding agents such as proteins, antibiotics, or synthetic ligands are covalently linked to the particle surface. These binding agents are able to interact with target species forming an agglomerate. Applying an external magnetic field gradient exerts a force on the nanoparticles, allowing them to be separated from the bulk fluid, thus removing contaminants. This can neutralize the toxicity of sepsis, but runs the risk of nephrotoxicity and neurotoxicity.
The small size (< 100 nm) and large surface area of functionalized nanomagnets offer advantageous properties compared to hemoperfusion, which is a clinically used technique for the purification of blood and is based on surface adsorption. These advantages include high loading capacity, high selectivity towards the target compound, fast diffusion, low hydrodynamic resistance, and low dosage requirements.
Tissue engineering
Nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. If successful, tissue engineering may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles to the polymer matrix at low concentrations (~0.2 weight %) significantly improves the compressive and flexural mechanical properties of polymeric nanocomposites. These nanocomposites may potentially serve as novel, mechanically strong, lightweight bone implants.
For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery.
Another example is nanonephrology, the use of nanomedicine on the kidney.
The full potential and implications of nanotechnology uses within the tissue engineering are not yet fully understood, despite research spanning the past two decades.
Vaccine development
Today, a significant proportion of vaccines against viral diseases are created using nanotechnology. Solid lipid nanoparticles represent a novel delivery system for some vaccines against SARS-CoV-2 (the virus that causes COVID-19). In recent decades, nanosized adjuvants have been widely used to enhance immune responses to targeted vaccine antigens. Inorganic nanoparticles of aluminum, silica and clay, as well as organic nanoparticles based on polymers and lipids, are commonly used adjuvants within modern vaccine formulations. Nanoparticles of natural polymers such as chitosan are commonly used adjuvants in modern vaccine formulations. Ceria nanoparticles appear very promising for both enhancing vaccine responses and mitigating inflammation, as their adjuvanticity can be adjusted by modifying parameters such as size, crystallinity, surface state, and stoichiometry.
In addition, virus-like nanoparticles are also being researched. These structures allow vaccines to self-assemble without encapsulating viral RNA, making them non-infectious and incapable of replication. These virus-like nanoparticles are designed to elicit a strong immune response by using a self-assembled layer of virus capsid proteins.
Medical devices
Neuro-Electronic Interfacing
Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to connect and interact with the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer. A refuelable system implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a non-refuelable system implies that all power is drawn from internal energy storage, ceasing operation once the energy is depleted. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed, using glucose from biofluids such as human blood or watermelons. One limitation to this innovation is the potential for electrical interference, leakage, or overheating due to power consumption. The wiring of the structure is extremely difficult because the wires must be positioned precisely in the nervous system. The structures that will provide the interface must also be compatible with the body's immune system. Current research is developing nanoparticle coatings for the electrodes to allow for improved recording and reduce interference.
Cell repair machines
Molecular nanotechnology is a speculative subfield of nanotechnology that explores the potential to engineer molecular assemblers—machines capable of reorganizing matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots are far beyond current capabilities. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him () the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
Regulatory impacts
As nanomedicine continues to develop and becomes a potential treatment for diseases, regulatory challenges have come to light. This section highlights some of the regulatory considerations and challenges faced by the Food and Drug Administration (FDA), the European Medicines Agency (EMA), and each manufacturing organization. The major challenges that companies face are reproducible manufacturing processes, scalability, availability of appropriate characterization methods, safety issues, and poor understanding of disease heterogeneity and patient preselection strategies. Despite these challenges, several therapeutic nanomedicine products have been approved by the FDA and EMA. In order to be approved for market, these therapies are evaluated for biocompatibility and immunotoxicity, and must undergo a preclinical assessment.
The current scope of approved nanomedicines is mainly nano-drugs, but as the field continues to grow and more applications of nanomedicine progress to a marketable scale, broader regulatory oversight will be needed.
See also
British Society for Nanomedicine
Biopharmaceutical
Colloidal gold
Heart nanotechnology
IEEE P1906.1 – Recommended Practice for Nanoscale and Molecular Communication Framework
Impalefection
Monitoring (medicine)
Nanobiotechnology
Nanoparticle–biomolecule conjugate
Nanozymes
Nanotechnology in fiction
Photodynamic therapy
Top-down and bottom-up design
References
Nanotechnology
Biotechnology | Nanomedicine | [
"Materials_science",
"Engineering",
"Biology"
] | 4,748 | [
"Materials science",
"Biotechnology",
"Nanomedicine",
"nan",
"Nanotechnology"
] |
21,523 | https://en.wikipedia.org/wiki/Neural%20network%20%28machine%20learning%29 | In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a model inspired by the structure and function of biological neural networks in animal brains.
An ANN consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
Training
Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
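As a rough illustration of empirical risk minimization with a gradient-based update, the following sketch fits a single linear unit to made-up labeled data by repeatedly stepping down the gradient of the mean squared error (the data, sizes, and learning rate are illustrative assumptions, not taken from the source):

```python
import numpy as np

# Illustrative labeled training data: 100 samples, 3 features, noisy linear targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w, b = np.zeros(3), 0.0      # parameters of a single linear unit
lr = 0.1                     # learning rate (a hyperparameter fixed before training)

for epoch in range(200):
    y_hat = X @ w + b                      # predicted outputs
    err = y_hat - y                        # residuals
    risk = np.mean(err ** 2)               # empirical risk: mean squared error
    grad_w = 2 * X.T @ err / len(y)        # gradient of the risk w.r.t. the weights
    grad_b = 2 * err.mean()                # gradient w.r.t. the bias
    w -= lr * grad_w                       # gradient descent step
    b -= lr * grad_b

print(round(risk, 4), w.round(2))          # the risk shrinks and w approaches true_w
```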
History
Early work
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.
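As a sketch of that equivalence, the weights of such a single-layer linear network can be obtained directly with the classical least-squares solution rather than by iterative adjustment (the data points below are made up for illustration):

```python
import numpy as np

# Made-up observations: a column of ones provides the bias term.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.9, 3.1, 4.8, 7.2])

# The least-squares weights minimize the mean squared error between X @ w and y.
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(w)   # [bias, slope] of the best linear fit
```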
Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research.
R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
Deep learning breakthroughs in the 1960s and 1970s
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976 transfer learning was introduced in neural networks learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
Backpropagation
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
Convolutional neural networks
Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.
Recurrent neural networks
One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by the Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.
In 1982, a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), named Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist carried a debate in the early 1980s on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent of cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982 the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation. It was an example of an AI system, a recurrent neural network, contributing to an issue that was at the same time being addressed by cognitive psychology.
Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which requires the forget gate introduced in 1999; that version became the default choice for RNN architecture.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
Deep learning
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber, who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need.
It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
Models
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another, allowing weights to choose the signal between neurons.
Artificial neurons
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.
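A minimal sketch of that computation for a single neuron, assuming a sigmoid activation function (the input values and weights below are arbitrary):

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a nonlinear activation."""
    activation = np.dot(weights, inputs) + bias     # the "activation" (weighted sum)
    return 1.0 / (1.0 + np.exp(-activation))        # sigmoid activation function

x = np.array([0.5, -1.2, 3.0])      # inputs (feature values or other neurons' outputs)
w = np.array([0.4, 0.1, -0.6])      # connection weights
print(neuron_output(x, w, bias=0.2))
```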
Organization
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
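The layered organization can be sketched as a chain of fully connected layers, each applying a weight matrix, a bias vector, and an activation function; the layer sizes below are arbitrary and chosen only for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    """Feedforward pass: input layer -> hidden layer(s) -> output layer."""
    for W, b in layers:
        x = relu(W @ x + b)     # every neuron in one layer feeds every neuron in the next
    return x

rng = np.random.default_rng(1)
# One input layer of size 4, one hidden layer of size 5, one output layer of size 2.
layers = [(rng.normal(size=(5, 4)), np.zeros(5)),
          (rng.normal(size=(2, 5)), np.zeros(2))]
print(forward(rng.normal(size=4), layers))
```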
Hyperparameter
A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
Learning
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
Learning rate
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
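A sketch of such a momentum update, where each weight change blends the current gradient with the previous change (the cost function, learning rate, and momentum value here are illustrative):

```python
def momentum_step(weight, gradient, prev_change, lr=0.01, momentum=0.9):
    """Momentum near 0 follows the gradient; momentum near 1 follows the last change."""
    change = momentum * prev_change - lr * gradient
    return weight + change, change

# Example: minimize the simple cost C(w) = w**2, whose gradient is 2*w.
w, prev = 5.0, 0.0
for _ in range(200):
    w, prev = momentum_step(w, gradient=2 * w, prev_change=prev)
print(w)    # approaches the minimum at w = 0
```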
Cost function
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) or because it arises from the model (e.g. in a probabilistic model the model's posterior probability can be used as an inverse cost).
Backpropagation
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backprop calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
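A minimal sketch of backpropagation for a tiny two-layer network with a sigmoid hidden layer and a mean-squared-error cost; the data, layer sizes, and learning rate are illustrative assumptions, not the method of any particular source:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                     # 8 samples, 3 features
y = rng.normal(size=(8, 1))                     # made-up targets
n = len(X)

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer parameters
lr = 0.1

for _ in range(2000):
    # Forward pass.
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden activations
    out = h @ W2 + b2                           # linear output layer
    d_out = 2 * (out - y) / n                   # gradient of the mean squared error w.r.t. out

    # Backward pass: apply the chain rule from the cost back to each weight.
    grad_W2 = h.T @ d_out
    grad_b2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)        # propagate the error through the sigmoid
    grad_W1 = X.T @ d_h
    grad_b1 = d_h.sum(axis=0)

    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= lr * grad                      # gradient descent on every parameter

h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
print(float(np.mean((h @ W2 + b2 - y) ** 2)))   # final training cost
```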
Learning paradigms
Machine learning is commonly separated into three main learning paradigms, supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task.
Supervised learning
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
Unsupervised learning
In unsupervised learning, input data is given along with the cost function, some function of the data $x$ and the network's output $f(x)$. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model $f(x) = a$ where $a$ is a constant and the cost $C = \mathrm{E}[(x - f(x))^2]$. Minimizing this cost produces a value of $a$ that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between $x$ and $f(x)$, whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
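Sketching the reasoning for that trivial example: setting the derivative of the cost with respect to the constant $a$ to zero,

$$\frac{\partial C}{\partial a} = \frac{\partial}{\partial a}\,\mathrm{E}\!\left[(x-a)^2\right] = -2\,\mathrm{E}[x-a] = 0 \quad\Longrightarrow\quad a = \mathrm{E}[x],$$

so the cost-minimizing constant is indeed the mean of the data.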
Reinforcement learning
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally the environment is modeled as a Markov decision process (MDP) with states $s_1, \ldots, s_n \in S$ and actions $a_1, \ldots, a_m \in A$. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution $P(c_t \mid s_t)$, the observation distribution $P(x_t \mid s_t)$ and the transition distribution $P(s_{t+1} \mid s_t, a_t)$, while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
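A rough sketch of this setting, using a made-up three-state, two-action MDP: since the rules and the long-term cost can usually only be estimated, the expected cumulative cost of a fixed stochastic policy is approximated here by Monte Carlo rollouts (all tables and sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up MDP: P[s, a, s'] are transition probabilities, C[s, a] are instantaneous costs.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.6, 0.3]],
              [[0.5, 0.5, 0.0], [0.0, 0.2, 0.8]],
              [[0.3, 0.0, 0.7], [0.4, 0.4, 0.2]]])
C = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.0, 0.3]])

def estimated_long_term_cost(policy, start=0, horizon=50, episodes=1000):
    """Monte Carlo estimate of the expected cumulative cost under a stochastic policy."""
    total = 0.0
    for _ in range(episodes):
        s, cumulative = start, 0.0
        for _ in range(horizon):
            a = rng.choice(2, p=policy[s])     # sample an action from the policy
            cumulative += C[s, a]              # instantaneous cost
            s = rng.choice(3, p=P[s, a])       # sample the next state
        total += cumulative
    return total / episodes

uniform_policy = np.full((3, 2), 0.5)          # pick each action with probability 0.5
print(estimated_long_term_cost(uniform_policy))
```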
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks.
Self-learning
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment where it behaves, and the other is the genetic environment, from which it initially and only once receives initial emotions about situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA will learn a goal-seeking behavior in the behavioral environment that contains both desirable and undesirable situations.
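A minimal sketch of that crossbar loop, with a made-up behavioral environment and a genome vector of initial situation emotions (the transition rule and all values are invented; in the actual CAA the emotion is itself computed in crossbar fashion rather than read from a table):

```python
import numpy as np

n_actions, n_situations = 3, 4
W = np.zeros((n_actions, n_situations))       # crossbar memory w(a, s)
genome = np.array([0.0, -1.0, 0.5, 1.0])      # initial emotions toward each situation

def environment(s, a):
    """Made-up behavioral environment: action a in situation s leads to a consequence situation."""
    return (s + a + 1) % n_situations

s = 0
for _ in range(100):
    a = int(np.argmax(W[:, s]))               # in situation s perform action a
    s_next = environment(s, a)                # receive consequence situation s'
    v = genome[s_next]                        # emotion v(s') of being in the consequence situation
    W[a, s] += v                              # update crossbar memory: w'(a,s) = w(a,s) + v(s')
    s = s_next

print(W)
```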
Neuroevolution
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
Stochastic neural network
Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
Other
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
Modes
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
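The three approaches differ only in how many examples are drawn for each weight update: one example (stochastic), the entire data set (batch), or a small random subset (mini-batch). A sketch of the mini-batch case, with made-up data:

```python
import numpy as np

def iterate_minibatches(X, y, batch_size, rng):
    """Yield randomly selected mini-batches; each yielded batch drives one weight update."""
    idx = rng.permutation(len(X))              # shuffle so batches are sampled stochastically
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(10, 3)), rng.normal(size=10)
for xb, yb in iterate_minibatches(X, y, batch_size=4, rng=rng):
    print(xb.shape)                            # batches of 4, 4 and 2 examples
```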
Types
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that have a mix of low and high frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads;
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
Network design
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.). Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides a basic multilayer perceptron implementation, while deeper networks are commonly implemented with frameworks such as TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.
Applications
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling)
Data processing (including filtering, clustering, blind source separation, and compression)
Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management)
Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction, object recognition, and sequential decision making)
Sequence recognition (including gesture, speech, and handwritten and printed text recognition)
Sensor data analysis (including image analysis)
Robotics (including directing manipulators and prostheses)
Data mining (including knowledge discovery in databases)
Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets)
Quantum chemistry
General game playing
Generative AI
Data visualization
Machine translation
Social network filtering
E-mail spam filtering
Medical diagnosis
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs can also help mitigate flooding when used to model rainfall-runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.
Theoretical properties
Computational power
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
Capacity
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are known to the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory to find the maximum capacity under the best possible circumstances, that is, with input data given in a specific form. The VC dimension for arbitrary inputs is half the information capacity of a perceptron; the VC dimension for arbitrary points is sometimes referred to as memory capacity.
Convergence
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not guarantee to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may cross a saddle point, which may lead convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example is that when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
Generalization and statistics
Applications whose goal is to create a system that generalizes well to unseen examples, face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters. Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error.
The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly corresponds to the error over the training set and the predicted error in unseen data due to overfitting.
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
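A sketch of that estimate, assuming the validation errors are roughly normal so that a standard z-value of about 1.96 gives a 95% interval (the predictions and targets below are made up):

```python
import numpy as np

predictions = np.array([2.1, 0.9, 3.2, 1.8, 2.6])   # network outputs on a validation set
targets     = np.array([2.0, 1.1, 3.0, 1.7, 2.9])

mse = np.mean((predictions - targets) ** 2)         # validation MSE used as a variance estimate
sigma = np.sqrt(mse)

new_output = 2.4                                    # a fresh network output
# Approximate 95% confidence interval, assuming normally distributed errors.
print(new_output - 1.96 * sigma, new_output + 1.96 * sigma)
```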
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is:

$$y_i = \frac{e^{x_i}}{\sum_{j=1}^{c} e^{x_j}}$$

where $x_i$ is the raw output of unit $i$, $y_i$ is its normalized output, and the sum runs over all $c$ output units.
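A small sketch of that function, with the usual subtraction of the maximum input for numerical stability (an implementation detail not required by the formula itself):

```python
import numpy as np

def softmax(z):
    """Map a vector of raw outputs to a probability distribution over the classes."""
    shifted = z - np.max(z)       # does not change the result, but avoids overflow in exp
    exps = np.exp(shifted)
    return exps / exps.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # entries sum to 1; the largest input gets the largest probability
```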
Criticism
Training
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.
Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.
Dean Pomerleau uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research is devoted to extrapolating multiple training scenarios from a single training experience, and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns—it should not learn to always turn right).
Theory
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything". One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
Hardware
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons which require enormous CPU power and time.
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
Practical counterexamples
Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.
Hybrid approaches
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.
Dataset bias
Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
Recent advancements and future directions
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.
Image processing
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.
Speech recognition
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.
Natural language processing
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
Control systems
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
Finance
ANNs are used for stock market prediction and credit scoring:
In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.
Medicine
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
Content creation
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where Non Player Characters (NPCs) can make decisions based on all the characters currently in the game.
See also
ADALINE
Autoencoder
Bio-inspired computing
Blue Brain Project
Catastrophic interference
Cognitive architecture
Connectionist expert system
Connectomics
Deep image prior
Digital morphogenesis
Efficiently updatable neural network
Evolutionary algorithm
Genetic algorithm
Hyperdimensional computing
In situ adaptive tabulation
Large width limits of neural networks
List of machine learning concepts
Memristor
Neural gas
Neural network software
Optical neural network
Parallel distributed processing
Philosophy of artificial intelligence
Predictive analytics
Quantum neural network
Support vector machine
Spiking neural network
Stochastic parrot
Tensor product network
References
External links
A Brief Introduction to Neural Networks (D. Kriesel) – Illustrated, bilingual manuscript about artificial neural networks; Topics so far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks, Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)
Another introduction to ANN
Next Generation of Neural Networks – Google Tech Talks
Performance of Neural Networks
Neural Networks and Information
Computational statistics
Classification algorithms
Computational neuroscience
Market research
Mathematical psychology
Mathematical and quantitative methods (economics)
Bioinspiration | Neural network (machine learning) | [
"Mathematics",
"Engineering",
"Biology"
] | 10,228 | [
"Biological engineering",
"Mathematical psychology",
"Applied mathematics",
"Computational mathematics",
"Computational statistics",
"Bioinspiration"
] |
21,544 | https://en.wikipedia.org/wiki/Nuclear%20fusion | Nuclear fusion is a reaction in which two or more atomic nuclei (for example, nuclei of hydrogen isotopes deuterium and tritium), combine to form one or more atomic nuclei and neutrons. The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises as a result of the difference in nuclear binding energy between the atomic nuclei before and after the fusion reaction. Nuclear fusion is the process that powers active or main-sequence stars and other high-magnitude stars, where large amounts of energy are released.
A nuclear fusion process that produces atomic nuclei lighter than iron-56 or nickel-62 will generally release energy. These elements have a relatively small mass and a relatively large binding energy per nucleon. Fusion of nuclei lighter than these releases energy (an exothermic process), while the fusion of heavier nuclei results in energy retained by the product nucleons, and the resulting reaction is endothermic. The opposite is true for the reverse process, called nuclear fission. Nuclear fusion uses lighter elements, such as hydrogen and helium, which are in general more fusible; while the heavier elements, such as uranium, thorium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron.
History
American chemist William Draper Harkins was the first to propose the concept of nuclear fusion in 1915. Then in 1921, Arthur Eddington suggested hydrogen–helium fusion could be the primary source of stellar energy. Quantum tunneling was discovered by Friedrich Hund in 1927, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to demonstrate that large amounts of energy could be released by fusing small nuclei. Building on the early experiments in artificial nuclear transmutation by Patrick Blackett, laboratory fusion of hydrogen isotopes was accomplished by Mark Oliphant in 1932. In the remainder of that decade, the theory of the main cycle of nuclear fusion in stars was worked out by Hans Bethe.
Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. The first artificial thermonuclear fusion reaction occurred during the 1951 Greenhouse Item test of the first boosted fission weapon, which uses a small amount of deuterium–tritium gas to enhance the fission yield. The first thermonuclear weapon detonation, where the vast majority of the yield comes from fusion, was the 1952 Ivy Mike test of a liquid deuterium-fusing device.
While fusion bomb detonations were loosely considered for energy production, the possibility of controlled and sustained reactions remained the scientific focus for peaceful fusion power. Research into developing controlled fusion inside fusion reactors has been ongoing since the 1930s, with Los Alamos National Laboratory's Scylla I device producing the first laboratory thermonuclear fusion in 1958, but the technology is still in its developmental phase.
The US National Ignition Facility, which uses laser-driven inertial confinement fusion, was designed with a goal of achieving a fusion energy gain factor (Q) of larger than one; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011. On 13 December 2022, the United States Department of Energy announced that on 5 December 2022, they had successfully accomplished break-even fusion, "delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output."
Prior to this breakthrough, controlled fusion reactions had been unable to achieve break-even (self-sustaining) operation. The two most advanced approaches for it are magnetic confinement (toroid designs) and inertial confinement (laser designs). Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat plasma to the required temperatures are in development (see ITER). The ITER facility is expected to finish its construction phase in 2025, when it will start commissioning the reactor and initiate plasma experiments, but it is not expected to begin full deuterium–tritium fusion until 2035.
Private companies pursuing the commercialization of nuclear fusion received $2.6 billion in private funding in 2021 alone, going to many notable startups including but not limited to Commonwealth Fusion Systems, Helion Energy Inc., General Fusion, TAE Technologies Inc. and Zap Energy Inc.
One of the most recent breakthroughs in maintaining a sustained fusion reaction occurred in France's WEST fusion reactor, which maintained a plasma at 90 million degrees for a record time of six minutes. WEST is a tokamak, the same style of reactor as the upcoming ITER.
Process
The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, a manifestation of the strong interaction, which holds protons and neutrons tightly together in the atomic nucleus; and the Coulomb force, which causes positively charged protons in the nucleus to repel each other. Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the Coulomb force. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei, however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei.
Fusion powers stars and produces virtually all elements in a process called nucleosynthesis. The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation.
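As a rough consistency check of the figures above, the quoted hydrogen-to-helium conversion rate can be compared with the Sun's radiated power through mass–energy equivalence. This is a sketch only; the solar-luminosity value in the comment is a standard figure supplied here, not taken from the article.

```python
# Back-of-the-envelope check: the ~4 million tonnes of mass the Sun converts
# each second should correspond to its radiated power via E = m c^2.
c = 2.998e8                         # speed of light, m/s
hydrogen_in = 620e6 * 1000          # kg of hydrogen fused per second
helium_out = 616e6 * 1000           # kg of helium produced per second
delta_m = hydrogen_in - helium_out  # ~4e9 kg/s converted to energy
power = delta_m * c**2              # watts
print(f"mass converted: {delta_m:.1e} kg/s")
print(f"implied power:  {power:.1e} W")   # ~3.6e26 W, close to the accepted
                                          # solar luminosity of roughly 3.8e26 W
```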
It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially "fall" into each other and the result is fusion; this is an exothermic process.
Energy released in most nuclear reactions is much larger than in chemical reactions, because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV—less than one-millionth of the 17.6 MeV released in the deuterium–tritium (D–T) reaction. Fusion reactions have an energy density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Via the mass–energy equivalence, fusion converts about 0.7% of the reactant mass into energy. This can only be exceeded by the extreme cases of the accretion process involving neutron stars or black holes, approaching 40% efficiency, and antimatter annihilation at 100% efficiency. (The complete conversion of one gram of matter would release about 9×10^13 joules of energy.)
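A short numerical sketch of the comparisons above; the physical constants are standard values supplied here, not taken from the text.

```python
# Orders of magnitude behind the ionization-energy vs. fusion-energy comparison
# and the one-gram conversion figure.
c = 2.998e8            # m/s
eV = 1.602e-19         # J
one_gram = 1e-3 * c**2             # complete conversion of 1 g of matter, J
dt_reaction = 17.6e6 * eV          # one D-T fusion event, J
ionization = 13.6 * eV             # hydrogen ionization energy, J
print(f"1 g of matter      : {one_gram:.1e} J")                # ~9e13 J
print(f"one D-T reaction   : {dt_reaction:.1e} J")
print(f"ionization / D-T   : {ionization / dt_reaction:.1e}")  # under one-millionth
```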
In stars
An important fusion process is the stellar nucleosynthesis that powers stars, including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounts for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core).
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was unknown; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity). Eddington's paper reasoned that:
The leading theory of stellar energy, the contraction hypothesis, should cause the rotation of a star to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening.
The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy.
Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom (according to the then-prevailing theory of atomic structure which held atomic weight to be the distinguishing property between elements; work by Henry Moseley and Antonius van den Broek would later show that nucleic charge was the distinguishing property and that a helium nucleus, therefore, consisted of two hydrogen nuclei plus additional mass). This suggested that if such a combination could happen, it would release considerable energy as a byproduct.
If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (It is now known that most 'ordinary' stars are usually made of around 70% to 75% hydrogen.)
Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more accurate measurements of their atomic masses nothing more could be said at the time.
All of these speculations were proven correct in the following decades.
The primary source of solar energy, and that of similar size stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements. The heaviest elements are synthesized by fusion that occurs when a more massive star undergoes a violent supernova at the end of its life, a process known as supernova nucleosynthesis.
Requirements
A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through the Coulomb barrier.
When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects. So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations.
The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as nuclei atomic number grows.
The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are 62Ni, 58Fe, 56Fe, and 60Ni. Even though the nickel isotope 62Ni is more stable, the iron isotope 56Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create 62Ni through the alpha process.
An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the next heavier element. This is because protons and neutrons are fermions, which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle.
The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Only when the two nuclei actually come close enough for long enough can the strong attractive nuclear force take over and overcome the repulsive electrostatic force. This can also be described as the nuclei overcoming the so-called Coulomb barrier. The kinetic energy needed to achieve this can be lower than the barrier itself because of quantum tunneling.
The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products.
Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV. The (intermediate) result of the fusion is an unstable 5He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining 4He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier.
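The 14.1 MeV / 3.5 MeV split quoted above follows from momentum conservation for a two-body final state that starts roughly at rest; the sketch below uses rounded mass numbers as an assumption.

```python
# Two-body kinematics: the released energy divides in inverse proportion to the
# product masses, so the lighter particle carries most of the kinetic energy.
Q = 17.6                  # MeV liberated in D-T fusion
m_n, m_alpha = 1.0, 4.0   # rounded masses in atomic mass units (assumed)
E_n = Q * m_alpha / (m_n + m_alpha)
E_alpha = Q * m_n / (m_n + m_alpha)
print(f"neutron: {E_n:.1f} MeV, alpha: {E_alpha:.1f} MeV")  # ~14.1 and ~3.5 MeV
```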
The reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross-section and velocity. This average is called the 'reactivity', denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities: f = n₁n₂⟨σv⟩.
If a species of nuclei is reacting with a nucleus like itself, such as the DD reaction, then the product n₁n₂ must be replaced by n²/2.
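A minimal sketch of how the rate expression is used in practice; the density and reactivity values below are placeholders chosen only to illustrate the arithmetic, not reference data.

```python
# Volumetric fusion rate f = n1 * n2 * <sigma v>, and the resulting power density.
MeV = 1.602e-13             # J
n_D = n_T = 5e19            # ion densities, per m^3 (assumed 50/50 D-T mix)
sigma_v = 1e-22             # reactivity, m^3/s (assumed value at some temperature)
E_fus = 17.6 * MeV          # energy per D-T reaction
rate = n_D * n_T * sigma_v  # fusions per m^3 per second
print(f"rate          : {rate:.1e} m^-3 s^-1")
print(f"power density : {rate * E_fus:.1e} W/m^3")
# For a single-species fuel such as D-D, the product n_D * n_T would be
# replaced by n_D**2 / 2 to avoid double-counting pairs.
```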
The reactivity ⟨σv⟩ increases from virtually zero at room temperatures up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state.
Artificial fusion
Thermonuclear fusion
Thermonuclear fusion is the process of atomic nuclei combining or "fusing" using high temperatures to drive them close enough together for this to become possible. Such temperatures cause the matter to become a plasma and, if confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. There are two forms of thermonuclear fusion: uncontrolled, in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons ("hydrogen bombs") and in most stars; and controlled, where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive purposes.
Temperature is a measure of the average kinetic energy of particles, so by heating the material it will gain energy. After reaching sufficient temperature, given by the Lawson criterion, the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together.
In a deuterium–tritium fusion reaction, for example, the energy necessary to overcome the Coulomb barrier is 0.1 MeV. Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a temperature in excess of 1.2 billion kelvin.
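The conversion from barrier energy to temperature above is simply E = k_B·T; a sketch with assumed standard constants:

```python
# Converting the 0.1 MeV Coulomb barrier to an equivalent temperature.
k_B = 1.381e-23         # Boltzmann constant, J/K
eV = 1.602e-19          # J
T = 0.1e6 * eV / k_B
print(f"T ~ {T:.1e} K")  # ~1.2e9 K, the "1.2 billion kelvin" quoted above
```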
There are two effects that are needed to lower the actual temperature. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.1 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling. The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough energy, they can tunnel through the remaining barrier. For these reasons fuel at lower temperatures will still undergo fusion events, at a lower rate.
Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power. If thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint.
Beam–beam or beam–target fusion
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions.
Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves.
A number of attempts to recirculate the ions that "miss" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma, which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reversed configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies. A closely related approach is to merge two FRCs rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches.
Muon-catalyzed fusion
Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction has been unsuccessful because of the high energy required to create muons, their short 2.2 μs lifetime, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.
Other principles
Some other confinement principles have been investigated.
Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion and pure fusion bombs feasible. It is not near becoming a practical power source, due to the cost of manufacturing antimatter alone.
Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a heated pyroelectric crystal combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels, the D–D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces. D–T fusion reactions have been observed with a tritiated erbium target.
Nuclear fusion–fission hybrid (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion.
Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However, it would also require a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable.
Bubble fusion, also called sonofusion, was a proposed mechanism for achieving fusion via sonic cavitation that rose to prominence in the early 2000s. Subsequent attempts at replication failed, and the principal investigator, Rusi Taleyarkhan, was judged guilty of research misconduct in 2008.
Confinement in thermonuclear fusion
The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature, the plasma cannot be in direct contact with any solid material, so it has to be located in a vacuum. Also, high temperatures imply high pressures. The plasma tends to expand immediately and some force is necessary to act against it. This force can take one of three forms: gravitation in stars, magnetic forces in magnetic confinement fusion reactors, or inertia in inertial confinement fusion, where the reaction occurs before the plasma starts to expand, so that the plasma's own inertia keeps the material together.
Gravitational confinement
One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed, however, is so great that gravitational confinement is only found in stars—the least massive stars capable of sustained fusion are red dwarfs, while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough, after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon. In the most massive stars (at least 8–11 solar masses), the process is continued until some of their energy is produced by fusing lighter elements to iron. As iron has one of the highest binding energies, reactions producing heavier elements are generally endothermic. Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars also form these elements in the outer parts of the stars over long periods of time, by absorbing energy from fusion in the inside of the star, by absorbing neutrons that are emitted from the fusion process.
All of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end of element production, these heavier elements can produce energy in the process of being split again back toward the size of iron, in the process of nuclear fission. Nuclear fission thus releases energy that has been stored, sometimes billions of years before, during stellar nucleosynthesis.
Magnetic confinement
Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre). The fusion fuel can therefore be trapped using a strong magnetic field. A variety of magnetic configurations exist, including the toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems.
Inertial confinement
A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb, where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser, ion, or electron beam, or a Z-pinch. Another method is to use conventional high explosive material to compress a fuel to fusion conditions. The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions to generate neutrons from D-D reactions. The simplest and most direct method proved to be in a predetonated stoichiometric mixture of deuterium-oxygen. The other successful method was using a miniature Voitenko compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere.
Electrostatic confinement
There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a Penning trap and the polywell. The technology is relatively immature, however, and many scientific and engineering questions remain.
The best-known inertial electrostatic confinement (IEC) approach is the fusor. Starting in 1999, a number of amateurs have been able to do amateur fusion using these homemade devices. Other IEC devices include the Polywell, MIX POPS and Marble concepts.
Important reactions
Stellar reaction chains
At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm3), the energy release rate is only 276 μW/cm3—about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The fusion rate as a function of temperature (exp(−E/kT)) leads to the need to achieve temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0) × 10^9 K.
Criteria and candidates for terrestrial reactions
In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic.
To be a useful energy source, a fusion reaction must satisfy several criteria. It must:
Be exothermic: This limits the reactants to the low-Z (number of protons) side of the curve of binding energy. It also makes helium the most common product because of its extraordinarily tight binding, although 3He and 3H also show up.
Involve low atomic number (Z) nuclei: This is because the electrostatic repulsion that must be overcome before the nuclei are close enough to fuse (the Coulomb barrier) is directly related to the number of protons each nucleus contains – its atomic number.
Have two reactants: At anything less than stellar densities, three-body collisions are too improbable. In inertial confinement, both stellar densities and temperatures are exceeded to compensate for the shortcomings of the third parameter of the Lawson criterion, ICF's very short confinement time.
Have two or more products: This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force.
Conserve both protons and neutrons: The cross sections for the weak interaction are too small.
Few reactions meet these criteria. The following are those with the largest cross sections:
(1)    D   + T    →  4He (3.5 MeV)   + n0 (14.1 MeV)
(2i)   D   + D    →  T (1.01 MeV)    + p+ (3.02 MeV)          50%
(2ii)  D   + D    →  3He (0.82 MeV)  + n0 (2.45 MeV)          50%
(3)    D   + 3He  →  4He (3.6 MeV)   + p+ (14.7 MeV)
(4)    T   + T    →  4He + 2 n0 + 11.3 MeV
(5)    3He + 3He  →  4He + 2 p+ + 12.86 MeV
(6i)   3He + T    →  4He + p+ + n0 + 12.1 MeV                 57%
(6ii)  3He + T    →  4He (4.8 MeV)   + D (9.5 MeV)            43%
(7i)   D   + 6Li  →  2 4He + 22.4 MeV
(7ii)  D   + 6Li  →  3He + 4He + n0 + 1.8 MeV
(7iii) D   + 6Li  →  7Li + p+ + 5.0 MeV
(7iv)  D   + 6Li  →  7Be + n0 + 3.4 MeV
(8)    p+  + 6Li  →  4He (1.7 MeV)   + 3He (2.3 MeV)
(9)    3He + 6Li  →  2 4He + p+ + 16.9 MeV
(10)   p+  + 11B  →  3 4He + 8.7 MeV
For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given.
Some reaction candidates can be eliminated at once. The D–6Li reaction has no advantage compared to p+–11B because it is roughly as difficult to burn but produces substantially more neutrons through D–D side reactions. There is also a p+–7Li reaction, but the cross section is far too low, except possibly when Ti > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p+–9Be reaction, which is not only difficult to burn, but 9Be can be easily induced to split into two alpha particles and a neutron.
In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors:
n0 + 6Li → T + 4He + 4.784 MeV
n0 + 7Li → T + 4He + n0 − 2.467 MeV
The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo "Shrimp" had understood the usefulness of 6Li in tritium production, but had failed to recognize that 7Li fission would greatly increase the yield of the bomb. While 7Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout.
To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section. Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/T² at that temperature are given for a few of these reactions in the following table.
Note that many of the reactions form chains. For instance, a reactor fueled with T and 3He creates some D, which is then possible to use in the D–3He reaction if the energies are "right". An elegant idea is to combine the reactions (8) and (9). The 3He from reaction (8) can react with 6Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.
Abundance of the nuclear fusion fuels
Neutronicity, confinement requirement, and power density
Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products Efus, the energy of the charged fusion products Ech, and the atomic number Z of the non-hydrogenic reactant.
Specification of the D–D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the T and 3He products. T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The D–3He reaction is optimized at a much higher temperature, so the burnup at the optimum D–D temperature may be low. Therefore, it seems reasonable to assume the T but not the 3He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1):
5 D → 4He + 2 n0 + 3He + p+, Efus = 4.03 + 17.6 + 3.27 = 24.9 MeV, Ech = 4.03 + 3.5 + 0.82 = 8.35 MeV.
For calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the D–D fusion energy per D–D reaction as Efus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as Ech = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium).
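A sketch that reproduces the bookkeeping above; the branch energies are taken from the reaction list, while the deuteron mass is an assumed standard constant.

```python
# D-D energy accounting with the tritium branch assumed to burn via D-T.
MeV = 1.602e-13              # J
m_D = 3.344e-27              # deuteron mass, kg (assumed)
E_fus = 0.5 * (4.03 + 17.6) + 0.5 * 3.27   # MeV per D-D reaction
E_ch = 0.5 * (4.03 + 3.5) + 0.5 * 0.82     # charged-particle share, MeV
E_per_deuteron = (2.0 / 5.0) * E_fus       # five deuterons feed two D-D steps
print(f"E_fus = {E_fus:.2f} MeV, E_ch = {E_ch:.2f} MeV")   # ~12.5 and ~4.2 MeV
specific = E_per_deuteron * MeV / m_D / 1e6                # MJ per kg of deuterium
print(f"specific energy ~ {specific:.1e} MJ/kg")           # ~2.4e8 MJ/kg, the same
                                                           # order as the figure above
```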
Another unique aspect of the D–D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.
With this choice, we tabulate parameters for four of the most important reactions
The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as (Efus − Ech)/Efus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium.
Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z+1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the D–D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction.
Thus there is a "penalty" of for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for – because each ion can react with any of the other ions, not just a fraction of them.
We can now compare these reactions in the following table.
The maximum value of ⟨σv⟩/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing the corresponding value for the D–T reaction by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D–T reaction under comparable conditions. The column "Lawson criterion" weights these results with Ech and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D–T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by Efus. The final column indicates how much lower the fusion power density of the other reactions is compared to the D–T reaction and can be considered a measure of the economic potential.
Bremsstrahlung losses in quasineutral, isotropic plasmas
The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma. The electrons will generally have a temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-ray radiation of 10–30 keV energy, a process known as Bremsstrahlung.
The huge size of the Sun and stars means that the x-rays produced in this process will not escape and will deposit their energy back into the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will be optically thin for x-rays of this energy range. X-rays are difficult to reflect but they are effectively absorbed (and converted into heat) in less than a millimeter's thickness of stainless steel (which is part of a reactor's shield). This means the bremsstrahlung process is carrying energy out of the plasma, cooling it.
The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions:
The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain for some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too.
The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for D–T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D–T is even lower and the required confinement even more difficult to achieve. For D–D and D–3He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For 3He–3He, p+–6Li and p+–11B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma have been considered but rejected. This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with.
Mathematical description of cross section
Fusion under classical physics
In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometer, the energy needed for fusion of two bare hydrogen nuclei is the Coulomb energy at that separation, E = e²/(4πε₀r) ≈ 1.4 MeV.
This would imply that for the core of the Sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen nuclei would reach this threshold thermally is vanishingly small; that is, classically, fusion would never occur. However, fusion in the Sun does occur due to quantum mechanics.
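The classical estimate can be written out explicitly; the constants below are assumed standard values, and the Boltzmann factor exp(−E/kT) is used only as an order-of-magnitude stand-in for the thermal probability.

```python
import math
# Coulomb energy of two protons at ~1 fm separation, and the Boltzmann factor
# at the solar-core temperature of ~1.4 keV.
e = 1.602e-19          # C
eps0 = 8.854e-12       # F/m
r = 1e-15              # m
keV = 1.602e-16        # J
E_coulomb = e**2 / (4 * math.pi * eps0 * r)
kT = 1.4 * keV
print(f"barrier ~ {E_coulomb / 1.602e-13:.1f} MeV")             # ~1.4 MeV
print(f"exp(-E/kT) ~ 10^{-E_coulomb / kT / math.log(10):.0f}")  # astronomically small
```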
Parameterization of cross section
The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius as the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section, which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces:
σ ≈ σgeometry × T × R, where σgeometry is the geometric cross section, T is the barrier transparency and R is the reaction characteristics of the reaction.
σgeometry is of the order of the square of the de Broglie wavelength, σgeometry ~ (ħ/(mrv))² ∝ 1/ε, where mr is the reduced mass of the system and ε is the center-of-mass energy of the system.
T can be approximated by the Gamow transparency, which has the form T ≈ exp(−√(εG/ε)), where εG = (παZ₁Z₂)² × 2mrc² is the Gamow energy; it comes from estimating the quantum tunneling probability through the potential barrier.
R contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions, the variation of R is small compared to the variation from the Gamow factor, and so it is approximated by a function called the astrophysical S-factor, S(ε), which is weakly varying in energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form σ(ε) ≈ (S(ε)/ε) exp(−√(εG/ε)).
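A sketch of this parameterization in code; the Gamow-energy expression is written out explicitly, and the reduced mass and S-factor values are placeholders for a D–T-like pair, not measured data.

```python
import math

def gamow_energy_keV(Z1, Z2, m_r_c2_keV, alpha=1 / 137.036):
    # eps_G = (pi * alpha * Z1 * Z2)^2 * 2 * m_r c^2
    return (math.pi * alpha * Z1 * Z2) ** 2 * 2 * m_r_c2_keV

def sigma_barn(eps_keV, S_keV_barn, eps_G_keV):
    # sigma(eps) ~ S(eps)/eps * exp(-sqrt(eps_G/eps)), with S assumed constant here
    return (S_keV_barn / eps_keV) * math.exp(-math.sqrt(eps_G_keV / eps_keV))

# Example: reduced mass ~1.2 amu -> m_r c^2 ~ 1.12e6 keV, S ~ 1.2e4 keV*barn (assumed).
eps_G = gamow_energy_keV(1, 1, 1.12e6)
for eps in (10, 50, 100):        # center-of-mass energies, keV
    print(f"{eps:3d} keV  sigma ~ {sigma_barn(eps, 1.2e4, eps_G):.2e} barn")
```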
More detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory.
Formulas of fusion cross sections
The Naval Research Lab's plasma physics formulary gives the total cross section in barns as a function of the energy (in keV) of the incident particle towards a target ion at rest fit by the formula:
with the following coefficient values:
Bosch and Hale also report R-matrix-calculated cross sections fitted to observational data with Padé rational approximant coefficients. With energy in units of keV and cross sections in units of millibarn, the S-factor has the form:
, with the coefficient values:
where
Maxwell-averaged nuclear cross sections
In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution, meaning the particles have a range of energies centered around the plasma temperature. The sun, magnetically confined plasmas and inertial confinement fusion systems are well modeled to be in thermal equilibrium. In these cases, the value of interest is the fusion cross-section averaged across the Maxwell–Boltzmann distribution. The Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion reactivities ⟨σv⟩ in cm3/s.
At low temperatures the tabulated data can be represented by simple fitted expressions, with the temperature T in units of keV.
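A numerical sketch of the Maxwell averaging itself, using the placeholder cross-section model from the previous sketch; the output is illustrative only and should not be read as formulary data.

```python
import math
# <sigma v> = sqrt(8/(pi*m_r)) * (kT)^(-3/2) * integral sigma(eps)*eps*exp(-eps/kT) d(eps)
keV = 1.602e-16                          # J
m_r = 1.2 * 1.6605e-27                   # reduced mass ~1.2 amu, kg (assumed)
eps_G, S = 1175.0, 1.2e4                 # assumed Gamow energy (keV) and S-factor (keV*barn)

def sigma_m2(eps_keV):
    return (S / eps_keV) * math.exp(-math.sqrt(eps_G / eps_keV)) * 1e-28  # barn -> m^2

def reactivity(T_keV, n=20000, eps_max=500.0):
    kT = T_keV * keV
    d_eps = eps_max / n
    total = 0.0
    for i in range(n):                   # midpoint rule over energy (keV grid)
        eps = (i + 0.5) * d_eps
        total += sigma_m2(eps) * (eps * keV) * math.exp(-eps * keV / kT) * (d_eps * keV)
    return math.sqrt(8.0 / (math.pi * m_r)) * kT ** -1.5 * total   # m^3/s

for T in (5, 10, 20):                    # plasma temperatures, keV
    print(f"T = {T:2d} keV   <sigma v> ~ {reactivity(T):.1e} m^3/s")
```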
See also
References
Further reading
External links
NuclearFiles.org – A repository of documents related to nuclear power.
Annotated bibliography for nuclear fusion from the Alsos Digital Library for Nuclear Issues
NRL Fusion Formulary
Physical phenomena
Energy conversion
Neutron sources
Nuclear chemistry
Nuclear physics | Nuclear fusion | [
"Physics",
"Chemistry"
] | 10,765 | [
"Physical phenomena",
"Nuclear chemistry",
"nan",
"Nuclear physics",
"Nuclear fusion"
] |
21,664 | https://en.wikipedia.org/wiki/Nebula | A nebula (; : nebulae, or nebulas) is a distinct luminescent part of interstellar medium, which can consist of ionized, neutral, or molecular hydrogen and also cosmic dust. Nebulae are often star-forming regions, such as in the Pillars of Creation in the Eagle Nebula. In these regions, the formations of gas, dust, and other materials "clump" together to form denser regions, which attract further matter and eventually become dense enough to form stars. The remaining material is then thought to form planets and other planetary system objects.
Most nebulae are of vast size; some are hundreds of light-years in diameter. A nebula that is visible to the human eye from Earth would appear larger, but no brighter, from close by. The Orion Nebula, the brightest nebula in the sky and occupying an area twice the angular diameter of the full Moon, can be viewed with the naked eye but was missed by early astronomers. Although denser than the space surrounding them, most nebulae are far less dense than any vacuum created on Earth – a nebular cloud the size of the Earth would have a total mass of only a few kilograms. Earth's air has a density of approximately 10^19 molecules per cubic centimeter; by contrast, the densest nebulae can have densities of approximately 10^4 molecules per cubic centimeter. Many nebulae are visible due to fluorescence caused by embedded hot stars, while others are so diffuse that they can be detected only with long exposures and special filters. Some nebulae are variably illuminated by T Tauri variable stars.
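The "few kilograms" figure can be checked with a quick estimate; the assumed density of about one hydrogen atom per cubic centimeter, typical of very diffuse interstellar gas, is an illustrative value and not a figure from the article.

```python
import math
# Mass of an Earth-sized sphere of gas at ~1 hydrogen atom per cm^3.
r_earth_cm = 6.371e8                  # Earth radius, cm
volume = (4 / 3) * math.pi * r_earth_cm ** 3
n = 1.0                               # atoms per cm^3 (assumed)
m_H = 1.67e-24                        # hydrogen atom mass, g
print(f"{volume * n * m_H / 1000:.1f} kg")   # ~2 kg, i.e. "a few kilograms"
```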
Originally, the term "nebula" was used to describe any diffused astronomical object, including galaxies beyond the Milky Way. The Andromeda Galaxy, for instance, was once referred to as the Andromeda Nebula (and spiral galaxies in general as "spiral nebulae") before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher, Edwin Hubble, and others. Edwin Hubble discovered that most nebulae are associated with stars and illuminated by starlight. He also helped categorize nebulae based on the type of light spectra they produced.
Observational history
Around 150 AD, Ptolemy recorded, in books VII–VIII of his Almagest, five stars that appeared nebulous. He also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. The first true nebula, as distinct from a star cluster, was mentioned by the Muslim Persian astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars (964). He noted "a little cloud" where the Andromeda Galaxy is located. He also cataloged the Omicron Velorum star cluster as a "nebulous star" and other nebulous objects, such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers in 1054.
In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope. This nebula was also observed by Johann Baptist Cysat in 1618. However, the first detailed study of the Orion Nebula was not performed until 1659 by Christiaan Huygens, who also believed he was the first person to discover this nebulosity.
In 1715, Edmond Halley published a list of six nebulae. This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 (including eight not previously known) in 1746. From 1751 to 1753, Nicolas-Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope, most of which were previously unknown. Charles Messier then compiled a catalog of 103 "nebulae" (now called Messier objects, which included what are now known to be galaxies) by 1781; his interest was detecting comets, and these were objects that might be mistaken for them.
The number of nebulae was then greatly increased by the efforts of William Herschel and his sister, Caroline Herschel. Their Catalogue of One Thousand New Nebulae and Clusters of Stars was published in 1786. A second catalog of a thousand was published in 1789, and the third and final catalog of 510 appeared in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity rather than a more distant cluster.
Beginning in 1864, William Huggins examined the spectra of about 70 nebulae. He found that roughly a third of them had the emission spectrum of a gas. The rest showed a continuous spectrum and were thus thought to consist of a mass of stars. A third category was added in 1912 when Vesto Slipher showed that the spectrum of the nebula that surrounded the star Merope matched the spectra of the Pleiades open cluster. Thus, the nebula radiates by reflected star light.
In 1923, following the Great Debate, it became clear that many "nebulae" were in fact galaxies far from the Milky Way.
Slipher and Edwin Hubble continued to collect the spectra from many different nebulae, finding 29 that showed emission spectra and 33 that had the continuous spectra of star light. In 1922, Hubble announced that nearly all nebulae are associated with stars and that their illumination comes from star light. He also discovered that the emission spectrum nebulae are nearly always associated with stars having spectral classifications of B or hotter (including all O-type main sequence stars), while nebulae with continuous spectra appear with cooler stars. Both Hubble and Henry Norris Russell concluded that the nebulae surrounding the hotter stars are transformed in some manner.
Formation
There are a variety of formation mechanisms for the different types of nebulae. Some nebulae form from gas that is already in the interstellar medium while others are produced by stars. Examples of the former case are giant molecular clouds, the coldest, densest phase of interstellar gas, which can form by the cooling and condensation of more diffuse gas. Examples of the latter case are planetary nebulae formed from material shed by a star in late stages of its stellar evolution.
Star-forming regions are a class of emission nebula associated with giant molecular clouds. These form as a molecular cloud collapses under its own weight, producing stars. Massive stars may form in the center, and their ultraviolet radiation ionizes the surrounding gas, making it visible at optical wavelengths. The region of ionized hydrogen surrounding the massive stars is known as an H II region while the shells of neutral hydrogen surrounding the H II region are known as photodissociation region. Examples of star-forming regions are the Orion Nebula, the Rosette Nebula and the Omega Nebula. Feedback from star-formation, in the form of supernova explosions of massive stars, stellar winds or ultraviolet radiation from massive stars, or outflows from low-mass stars may disrupt the cloud, destroying the nebula after several million years.
Other nebulae form as the result of supernova explosions; the death throes of massive, short-lived stars. The materials thrown off from the supernova explosion are then ionized by the energy and the compact object that its core produces. One of the best examples of this is the Crab Nebula, in Taurus. The supernova event was recorded in the year 1054 and is labeled SN 1054. The compact object that was created after the explosion lies in the center of the Crab Nebula and its core is now a neutron star.
Still other nebulae form as planetary nebulae. This is the final stage of a low-mass star's life, like Earth's Sun. Stars with a mass up to 8–10 solar masses evolve into red giants and slowly lose their outer layers during pulsations in their atmospheres. When a star has lost enough material, its temperature increases and the ultraviolet radiation it emits can ionize the surrounding nebula that it has thrown off. The Sun will produce a planetary nebula and its core will remain behind in the form of a white dwarf.
Types
Classical types
Objects named nebulae belong to four major groups. Before their nature was understood, galaxies ("spiral nebulae") and star clusters too distant to be resolved as stars were also classified as nebulae, but no longer are.
H II regions, large diffuse nebulae containing ionized hydrogen
Planetary nebulae
Supernova remnants (e.g., Crab Nebula)
Dark nebulae
Not all cloud-like structures are nebulae; Herbig–Haro objects are an example.
Flux Nebulae
Diffuse nebulae
Most nebulae can be described as diffuse nebulae, which means that they are extended and contain no well-defined boundaries. Diffuse nebulae can be divided into emission nebulae, reflection nebulae and dark nebulae.
Visible light nebulae may be divided into emission nebulae, which emit spectral line radiation from excited or ionized gas (mostly ionized hydrogen) and are often called H II regions (H II referring to ionized hydrogen), and reflection nebulae, which are visible primarily due to the light they reflect.
Reflection nebulae themselves do not emit significant amounts of visible light, but are near stars and reflect light from them. Similar nebulae not illuminated by stars do not exhibit visible radiation, but may be detected as opaque clouds blocking light from luminous objects behind them; they are called dark nebulae.
Although these nebulae have different visibility at optical wavelengths, they are all bright sources of infrared emission, chiefly from dust within the nebulae.
Planetary nebulae
Planetary nebulae are the remnants of the final stages of stellar evolution for mid-mass stars (between roughly 0.5 and 8 solar masses). Evolved asymptotic giant branch stars expel their outer layers outwards due to strong stellar winds, thus forming gaseous shells while leaving behind the star's core in the form of a white dwarf. Radiation from the hot white dwarf excites the expelled gases, producing emission nebulae with spectra similar to those of emission nebulae found in star formation regions. They are H II regions, because mostly hydrogen is ionized, but planetary nebulae are denser and more compact than the nebulae found in star formation regions.
Planetary nebulae were given their name by the first astronomical observers who were initially unable to distinguish them from planets, which were of more interest to them. The Sun is expected to spawn a planetary nebula about 12 billion years after its formation.
Protoplanetary nebulae
Supernova remnants
A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses. The gas falling inward either rebounds or gets so strongly heated that it expands outwards from the core, thus causing the star to explode. The expanding shell of gas forms a supernova remnant, a special diffuse nebula. Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission. This emission originates from high-velocity electrons oscillating within magnetic fields.
Examples
Ant Nebula
Barnard's Loop
Boomerang Nebula
Cat's Eye Nebula
Crab Nebula
Eagle Nebula
Eskimo Nebula
Carina Nebula
Fox Fur Nebula
Helix Nebula
Horsehead Nebula
Engraved Hourglass Nebula
Lagoon Nebula
Orion Nebula
Pelican Nebula
Red Square Nebula
Ring Nebula
Rosette Nebula
Tarantula Nebula
Waterfall Nebula
Catalogs
Gum catalog (emission nebulae)
RCW Catalogue (emission nebulae)
Sharpless catalog (emission nebulae)
Messier Catalogue
Caldwell Catalogue
Abell Catalog of Planetary Nebulae
Barnard Catalogue (dark nebulae)
Lynds' Catalogue of Bright Nebulae
Lynds' Catalogue of Dark Nebulae
See also
H I region
H II region
List of largest nebulae
List of diffuse nebulae
Lists of nebulae
Molecular cloud
Magellanic Clouds
Messier object
Nebular hypothesis
Orion molecular cloud complex
Timeline of knowledge about the interstellar and intergalactic medium
References
External links
Nebulae, SEDS Messier Pages
Fusedweb.pppl.gov
Historical pictures of nebulae, digital library of Paris Observatory
Space plasmas
Concepts in astronomy
Interstellar media | Nebula | [
"Physics",
"Astronomy"
] | 2,510 | [
"Space plasmas",
"Interstellar media",
"Outer space",
"Concepts in astronomy",
"Nebulae",
"Astrophysics",
"Astronomical objects"
] |
21,723 | https://en.wikipedia.org/wiki/Nonlinear%20optics | Nonlinear optics (NLO) is the branch of optics that describes the behaviour of light in nonlinear media, that is, media in which the polarization density P responds non-linearly to the electric field E of the light. The non-linearity is typically observed only at very high light intensities (when the electric field of the light is >10^8 V/m and thus comparable to the atomic electric field of ~10^11 V/m) such as those provided by lasers. Above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds.
History
The first nonlinear optical effect to be predicted was two-photon absorption, by Maria Goeppert Mayer for her PhD in 1931, but it remained an unexplored theoretical curiosity until 1961 and the almost simultaneous observation of two-photon absorption at Bell Labs
and the discovery of second-harmonic generation by Peter Franken et al. at University of Michigan, both shortly after the construction of the first laser by Theodore Maiman. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph "Nonlinear Optics".
Nonlinear optical processes
Nonlinear optics explains nonlinear response of properties such as frequency, polarization, phase or path of incident light. These nonlinear interactions give rise to a host of optical phenomena:
Frequency-mixing processes
Second-harmonic generation (SHG), or frequency doubling, generation of light with a doubled frequency (half the wavelength), two photons are destroyed, creating a single photon at two times the frequency.
Third-harmonic generation (THG), generation of light with a tripled frequency (one-third the wavelength), three photons are destroyed, creating a single photon at three times the frequency.
High-harmonic generation (HHG), generation of light with frequencies much greater than the original (typically 100 to 1000 times greater).
Sum-frequency generation (SFG), generation of light with a frequency that is the sum of two other frequencies (SHG is a special case of this).
Difference-frequency generation (DFG), generation of light with a frequency that is the difference between two other frequencies.
Optical parametric amplification (OPA), amplification of a signal input in the presence of a higher-frequency pump wave, at the same time generating an idler wave (can be considered as DFG).
Optical parametric oscillation (OPO), generation of a signal and idler wave using a parametric amplifier in a resonator (with no signal input).
Optical parametric generation (OPG), like parametric oscillation but without a resonator, using a very high gain instead.
Half-harmonic generation, the special case of OPO or OPG when the signal and idler degenerate into one single frequency.
Spontaneous parametric down-conversion (SPDC), the amplification of the vacuum fluctuations in the low-gain regime.
Optical rectification (OR), generation of quasi-static electric fields.
Nonlinear light-matter interaction with free electrons and plasmas.
Other nonlinear processes
Optical Kerr effect, intensity-dependent refractive index (a χ(3) effect).
Self-focusing, an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the spatial variation in the intensity creating a spatial variation in the refractive index.
Kerr-lens modelocking (KLM), the use of self-focusing as a mechanism to mode-lock lasers.
Self-phase modulation (SPM), an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the temporal variation in the intensity creating a temporal variation in the refractive index.
Optical solitons, an equilibrium solution for either an optical pulse (temporal soliton) or spatial mode (spatial soliton) that does not change during propagation due to a balance between dispersion and the Kerr effect (e.g. self-phase modulation for temporal and self-focusing for spatial solitons).
Self-diffraction, splitting of beams in a multi-wave mixing process with potential energy transfer.
Cross-phase modulation (XPM), where one wavelength of light can affect the phase of another wavelength of light through the optical Kerr effect.
Four-wave mixing (FWM), can also arise from other nonlinearities.
Cross-polarized wave generation (XPW), a χ(3) effect in which a wave with polarization vector perpendicular to the input one is generated.
Modulational instability.
Raman amplification
Optical phase conjugation.
Stimulated Brillouin scattering, interaction of photons with acoustic phonons
Multi-photon absorption, simultaneous absorption of two or more photons, transferring the energy to a single electron.
Multiple photoionisation, near-simultaneous removal of many bound electrons by one photon.
Chaos in optical systems.
Related processes
In these processes, the medium has a linear response to the light, but the properties of the medium are affected by other causes:
Pockels effect, the refractive index is affected by a static electric field; used in electro-optic modulators.
Acousto-optics, the refractive index is affected by acoustic waves (ultrasound); used in acousto-optic modulators.
Raman scattering, interaction of photons with optical phonons.
Parametric processes
Nonlinear effects fall into two qualitatively different categories, parametric and non-parametric effects. A parametric non-linearity
is an interaction in which the quantum state of the nonlinear material is not changed by the interaction with the optical field. As a consequence of this, the process is "instantaneous". Energy and momentum are conserved in the optical field, making phase matching important and polarization-dependent.
Theory
Parametric and "instantaneous" (i.e. material must be lossless and dispersionless through the Kramers–Kronig relations) nonlinear optical phenomena, in which the optical fields are not too large, can be described by a Taylor series expansion of the dielectric polarization density (electric dipole moment per unit volume) P(t) at time t in terms of the electric field E(t):
where the coefficients χ(n) are the n-th-order susceptibilities of the medium, and the presence of such a term is generally referred to as an n-th-order nonlinearity. Note that the polarization density P(t) and electric field E(t) are considered as scalars for simplicity. In general, χ(n) is an (n + 1)-th-rank tensor representing both the polarization-dependent nature of the parametric interaction and the symmetries (or lack thereof) of the nonlinear material.
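As a rough numerical illustration of this expansion, the sketch below evaluates the first few terms of the polarization series for a monochromatic field. The susceptibility values are arbitrary placeholders, not properties of any real material.

```python
import numpy as np

# Illustrative (non-physical) susceptibilities; chi^(n) has units of (m/V)^(n-1).
eps0 = 8.8541878128e-12                 # vacuum permittivity, F/m
chi = {1: 1.5, 2: 2.0e-12, 3: 4.0e-23}  # placeholder chi^(1), chi^(2), chi^(3)

def polarization(E, chi, eps0=eps0):
    """Return P(t) = eps0 * sum_n chi^(n) * E(t)^n for the orders supplied."""
    return eps0 * sum(c * E**n for n, c in chi.items())

t = np.linspace(0, 1e-14, 2000)            # 10 fs window
E = 1e9 * np.cos(2 * np.pi * 3.75e14 * t)  # ~800 nm field, 1e9 V/m amplitude

P = polarization(E, chi)
# The chi^(2) term is responsible for the second-harmonic and DC (rectification)
# components of P; the chi^(3) term produces the third harmonic and an
# intensity-dependent contribution at the fundamental frequency.
print(P[:3])
```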
Wave equation in a nonlinear material
Central to the study of electromagnetic waves is the wave equation. Starting with Maxwell's equations in an isotropic space, containing no free charge, it can be shown that
where PNL is the nonlinear part of the polarization density, and n is the refractive index, which comes from the linear term in P.
Note that one can normally use the vector identity
and Gauss's law (assuming no free charges, ∇ · D = 0),
to obtain the more familiar wave equation
For a nonlinear medium, Gauss's law does not imply that the identity ∇ · E = 0 is true in general, even for an isotropic medium. However, even when this term is not identically zero, it is often negligibly small and thus in practice is usually ignored, giving us the standard nonlinear wave equation:
Nonlinearities as a wave-mixing process
The nonlinear wave equation is an inhomogeneous differential equation. The general solution comes from the study of ordinary differential equations and can be obtained by the use of a Green's function. Physically one gets the normal electromagnetic wave solutions to the homogeneous part of the wave equation:
and the inhomogeneous term
acts as a driver/source of the electromagnetic waves. One of the consequences of this is a nonlinear interaction that results in energy being mixed or coupled between different frequencies, which is often called a "wave mixing".
In general, an n-th order nonlinearity will lead to (n + 1)-wave mixing. As an example, if we consider only a second-order nonlinearity (three-wave mixing), then the polarization P takes the form
If we assume that E(t) is made up of two components at frequencies ω1 and ω2, we can write E(t) as
and using Euler's formula to convert to exponentials,
where "c.c." stands for complex conjugate. Plugging this into the expression for P gives
which has frequency components at 2ω1, 2ω2, ω1 + ω2, ω1 − ω2, and 0. These three-wave mixing processes correspond to the nonlinear effects known as second-harmonic generation, sum-frequency generation, difference-frequency generation and optical rectification respectively.
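A quick numerical check of these frequency components: the sketch below (a toy example with arbitrary amplitudes and frequencies) squares a two-tone field, as a second-order polarization would, and inspects its spectrum.

```python
import numpy as np

fs = 1.0e16                        # sampling rate, Hz
t = np.arange(0, 2e-12, 1/fs)      # 2 ps window
f1, f2 = 3.0e14, 2.0e14            # two optical frequencies (arbitrary)

E = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)
P2 = E**2                          # proportional to the second-order polarization

spectrum = np.abs(np.fft.rfft(P2))
freqs = np.fft.rfftfreq(len(P2), 1/fs)

# The five strongest spectral lines sit at 0, f1-f2, 2*f2, f1+f2 and 2*f1,
# i.e. optical rectification, DFG, SHG of each tone, and SFG.
peaks = freqs[np.argsort(spectrum)[-5:]]
print(sorted(peaks / 1e14))        # in units of 1e14 Hz: ~[0, 1, 4, 5, 6]
```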
Note: Parametric generation and amplification is a variation of difference-frequency generation, where the lower frequency of one of the two generating fields is much weaker (parametric amplification) or completely absent (parametric generation). In the latter case, the fundamental quantum-mechanical uncertainty in the electric field initiates the process.
Phase matching
The above ignores the position dependence of the electrical fields. In a typical situation, the electrical fields are traveling waves described by
at position , with the wave vector , where is the velocity of light in vacuum, and is the index of refraction of the medium at angular frequency . Thus, the second-order polarization at angular frequency is
At each position within the nonlinear medium, the oscillating second-order polarization radiates at angular frequency and a corresponding wave vector . Constructive interference, and therefore a high-intensity field, will occur only if
The above equation is known as the phase-matching condition. Typically, three-wave mixing is done in a birefringent crystalline material, where the refractive index depends on the polarization and direction of the light that passes through. The polarizations of the fields and the orientation of the crystal are chosen such that the phase-matching condition is fulfilled. This phase-matching technique is called angle tuning. Typically a crystal has three axes, one or two of which have a different refractive index than the other one(s). Uniaxial crystals, for example, have a single preferred axis, called the extraordinary (e) axis, while the other two are ordinary axes (o) (see crystal optics). There are several schemes of choosing the polarizations for this crystal type. If the signal and idler have the same polarization, it is called "type-I phase matching", and if their polarizations are perpendicular, it is called "type-II phase matching". However, other conventions exist that specify further which frequency has what polarization relative to the crystal axis. These types are listed below, with the convention that the signal wavelength is shorter than the idler wavelength.
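As a rough numerical illustration of the phase-matching condition for SHG, the sketch below uses made-up refractive indices (placeholders, not data for any real crystal) to compute the wave-vector mismatch Δk = k(2ω) − 2k(ω) and the corresponding coherence length π/Δk.

```python
import numpy as np

c = 2.99792458e8            # speed of light, m/s

def k(wavelength_m, n):
    """Wave number inside the medium for a given vacuum wavelength and index."""
    return 2 * np.pi * n / wavelength_m

# Hypothetical indices for a dispersive crystal (illustrative values only).
lam_pump = 1064e-9          # fundamental wavelength, m
n_pump   = 1.654            # index at the fundamental
n_shg    = 1.674            # index at the second harmonic (normal dispersion)

dk = k(lam_pump / 2, n_shg) - 2 * k(lam_pump, n_pump)
L_coh = np.pi / abs(dk)     # coherence length: distance over which SHG builds up

print(f"delta k = {dk:.3e} 1/m, coherence length = {L_coh*1e6:.2f} um")
# Angle tuning (or temperature tuning) aims to choose polarizations and
# propagation direction so that the effective n_shg equals n_pump, i.e. dk -> 0.
```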
Most common nonlinear crystals are negative uniaxial, which means that the e axis has a smaller refractive index than the o axes. In those crystals, type-I and -II phase matching are usually the most suitable schemes. In positive uniaxial crystals, types VII and VIII are more suitable. Types II and III are essentially equivalent, except that the names of signal and idler are swapped when the signal has a longer wavelength than the idler. For this reason, they are sometimes called IIA and IIB. The type numbers V–VIII are less common than I and II and variants.
One undesirable effect of angle tuning is that the optical frequencies involved do not propagate collinearly with each other. This is due to the fact that the extraordinary wave propagating through a birefringent crystal possesses a Poynting vector that is not parallel to the propagation vector. This would lead to beam walk-off, which limits the nonlinear optical conversion efficiency. Two other methods of phase matching avoid beam walk-off by forcing all frequencies to propagate at 90° with respect to the optical axis of the crystal. These methods are called temperature tuning and quasi-phase-matching.
Temperature tuning is used when the pump (laser) frequency polarization is orthogonal to the signal and idler frequency polarization. The birefringence in some crystals, in particular lithium niobate, is highly temperature-dependent. The crystal temperature is controlled to achieve phase-matching conditions.
The other method is quasi-phase-matching. In this method the frequencies involved are not constantly locked in phase with each other; instead the crystal axis is flipped at a regular interval Λ, typically 15 micrometres in length. Hence, these crystals are called periodically poled. This results in the polarization response of the crystal being shifted back into phase with the pump beam by reversing the nonlinear susceptibility. This allows net positive energy flow from the pump into the signal and idler frequencies. In this case, the crystal itself provides the additional wavevector k = 2π/Λ (and hence momentum) to satisfy the phase-matching condition. Quasi-phase-matching can be expanded to chirped gratings to get more bandwidth and to shape an SHG pulse as is done in a dazzler. SHG of a pump and self-phase modulation (emulated by second-order processes) of the signal and an optical parametric amplifier can be integrated monolithically.
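Continuing the same toy calculation, quasi-phase-matching compensates the residual mismatch with a poling period Λ = 2π/Δk. The sketch below reuses the hypothetical Δk from the previous example; the numbers are placeholders, not real crystal data.

```python
import numpy as np

# Residual wave-vector mismatch from the previous sketch (illustrative value).
dk = 2.4e5                  # 1/m

Lambda = 2 * np.pi / dk     # poling period that supplies the missing wavevector
print(f"poling period = {Lambda*1e6:.1f} um")
# Flipping the sign of the nonlinear susceptibility every Lambda/2 re-phases the
# generated wave with the driving polarization, allowing continuous energy flow
# from the pump into the signal and idler instead of periodic back-conversion.
```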
Higher-order frequency mixing
The above holds for χ(2) processes. It can be extended for processes where χ(3) is nonzero, something that is generally true in any medium without any symmetry restrictions; in particular, resonantly enhanced sum or difference frequency mixing in gases is frequently used for extreme or "vacuum" ultraviolet light generation. In common scenarios, such as mixing in dilute gases, the non-linearity is weak and so the light beams are focused which, unlike the plane-wave approximation used above, introduces a pi phase shift on each light beam, complicating the phase-matching requirements. Conveniently, difference frequency mixing with χ(3) cancels this focal phase shift and often has a nearly self-canceling overall phase-matching condition, which relatively simplifies broad wavelength tuning compared to sum frequency generation. In χ(3) all four frequencies are mixing simultaneously, as opposed to sequential mixing via two χ(2) processes.
The Kerr effect can be described as a χ(3) effect as well. At high peak powers the Kerr effect can cause filamentation of light in air, in which the light travels without dispersion or divergence in a self-generated waveguide. At even higher intensities the Taylor series, which assumed the lower orders dominate, no longer converges, and instead a time-dependent model is used. When a noble gas atom is hit by an intense laser pulse, which has an electric field strength comparable to the Coulomb field of the atom, the outermost electron may be ionized from the atom. Once freed, the electron can be accelerated by the electric field of the light, first moving away from the ion, then back toward it as the field changes direction. The electron may then recombine with the ion, releasing its energy in the form of a photon. The light is emitted at every peak of the laser light field that is intense enough, producing a series of attosecond light flashes. The photon energies generated by this process can extend past the 800th harmonic order, up to a few keV. This is called high-order harmonic generation. The laser must be linearly polarized, so that the electron returns to the vicinity of the parent ion. High-order harmonic generation has been observed in noble gas jets, cells, and gas-filled capillary waveguides.
Example uses
Frequency doubling
One of the most commonly used frequency-mixing processes is frequency doubling, or second-harmonic generation. With this technique, the 1064 nm output from Nd:YAG lasers or the 800 nm output from Ti:sapphire lasers can be converted to visible light, with wavelengths of 532 nm (green) or 400 nm (violet) respectively.
Practically, frequency doubling is carried out by placing a nonlinear medium in a laser beam. While there are many types of nonlinear media, the most common media are crystals. Commonly used crystals are BBO (β-barium borate), KDP (potassium dihydrogen phosphate), KTP (potassium titanyl phosphate), and lithium niobate. These crystals have the necessary properties of being strongly birefringent (necessary to obtain phase matching, see below), having a specific crystal symmetry, being transparent for both the impinging laser light and the frequency-doubled wavelength, and having high damage thresholds, which makes them resistant against the high-intensity laser light.
Optical phase conjugation
It is possible, using nonlinear optical processes, to exactly reverse the propagation direction and phase variation of a beam of light. The reversed beam is called a conjugate beam, and thus the technique is known as optical phase conjugation (also called time reversal, wavefront reversal and is significantly different from retroreflection).
A device producing the phase-conjugation effect is known as a phase-conjugate mirror (PCM).
Principles
One can interpret optical phase conjugation as being analogous to a real-time holographic process. In this case, the interacting beams simultaneously interact in a nonlinear optical material to form a dynamic hologram (two of the three input beams), or real-time diffraction pattern, in the material. The third incident beam diffracts at this dynamic hologram, and, in the process, reads out the phase-conjugate wave. In effect, all three incident beams interact (essentially) simultaneously to form several real-time holograms, resulting in a set of diffracted output waves that phase up as the "time-reversed" beam. In the language of nonlinear optics, the interacting beams result in a nonlinear polarization within the material, which coherently radiates to form the phase-conjugate wave.
Reversal of wavefront means a perfect reversal of photons' linear momentum and angular momentum. The reversal of angular momentum means reversal of both polarization state and orbital angular momentum. Reversal of orbital angular momentum of optical vortex is due to the perfect match of helical phase profiles of the incident and reflected beams. Optical phase conjugation is implemented via stimulated Brillouin scattering, four-wave mixing, three-wave mixing, static linear holograms and some other tools.
The most common way of producing optical phase conjugation is to use a four-wave mixing technique, though it is also possible to use processes such as stimulated Brillouin scattering.
Four-wave mixing technique
For the four-wave mixing technique, we can describe four beams (j = 1, 2, 3, 4) with electric fields:
where Ej are the electric field amplitudes. Ξ1 and Ξ2 are known as the two pump waves, with Ξ3 being the signal wave, and Ξ4 being the generated conjugate wave.
If the pump waves and the signal wave are superimposed in a medium with a non-zero χ(3), this produces a nonlinear polarization field:
resulting in generation of waves with frequencies given by ω = ±ω1 ± ω2 ± ω3 in addition to third-harmonic generation waves with ω = 3ω1, 3ω2, 3ω3.
As above, the phase-matching condition determines which of these waves is the dominant. By choosing conditions such that ω = ω1 + ω2 − ω3 and k = k1 + k2 − k3, this gives a polarization field:
This is the generating field for the phase-conjugate beam, Ξ4. Its direction is given by k4 = k1 + k2 − k3, and so if the two pump beams are counterpropagating (k1 = −k2), then the conjugate and signal beams propagate in opposite directions (k4 = −k3). This results in the retroreflecting property of the effect.
Further, it can be shown that for a medium with refractive index n and a beam interaction length l, the electric field amplitude of the conjugate beam is approximated by
where c is the speed of light. If the pump beams E1 and E2 are plane (counterpropagating) waves, then
that is, the generated beam amplitude is the complex conjugate of the signal beam amplitude. Since the imaginary part of the amplitude contains the phase of the beam, this results in the reversal of phase property of the effect.
Note that the constant of proportionality between the signal and conjugate beams can be greater than 1. This is effectively a mirror with a reflection coefficient greater than 100%, producing an amplified reflection. The power for this comes from the two pump beams, which are depleted by the process.
The frequency of the conjugate wave can be different from that of the signal wave. If the pump waves are of frequency ω1 = ω2 = ω, and the signal wave is higher in frequency such that ω3 = ω + Δω, then the conjugate wave is of frequency ω4 = ω − Δω. This is known as frequency flipping.
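A minimal sketch, with illustrative wave vectors and frequencies and assuming the counterpropagating pump geometry described above, checking the direction and frequency of the conjugate wave in degenerate four-wave mixing:

```python
import numpy as np

# Pump waves 1 and 2 counterpropagate; signal wave 3 comes in at some angle.
k1 = np.array([0.0, 0.0, 1.0])
k2 = -k1                              # counterpropagating pump
k3 = np.array([0.3, 0.0, 0.954])      # arbitrary signal direction (roughly unit length)

k4 = k1 + k2 - k3                     # phase-matching condition for the conjugate
print(k4)                             # -> -k3: the conjugate retraces the signal

# Frequency flipping: with both pumps at omega and a detuned signal, the
# conjugate appears on the opposite side of the pump frequency.
omega, d_omega = 2.82e15, 1.0e12      # rad/s (arbitrary numbers)
omega3 = omega + d_omega
omega4 = omega + omega - omega3
print(omega4 - omega)                 # -> -d_omega
```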
Angular and linear momenta in optical phase conjugation
Classical picture
In classical Maxwell electrodynamics a phase-conjugating mirror performs reversal of the Poynting vector:
("in" means incident field, "out" means reflected field) where
which is a linear momentum density of electromagnetic field.
In the same way a phase-conjugated wave has an opposite angular momentum density vector
with respect to incident field:
The above identities are valid locally, i.e. in each space point in a given moment for an ideal phase-conjugating mirror.
Quantum picture
In quantum electrodynamics the photon with energy also possesses linear momentum and angular momentum, whose projection on propagation axis is , where is topological charge of photon, or winding number, is propagation axis. The angular momentum projection on propagation axis has discrete values .
In quantum electrodynamics the interpretation of phase conjugation is much simpler compared to classical electrodynamics. The photon reflected from phase conjugating-mirror (out) has opposite directions of linear and angular momenta with respect to incident photon (in):
Nonlinear optical pattern formation
Optical fields transmitted through nonlinear Kerr media can also display pattern formation owing to the nonlinear medium amplifying spatial and temporal noise. The effect is referred to as optical modulation instability. This has been observed in photorefractive media and photonic lattices, as well as in photoreactive systems. In the latter case, optical nonlinearity is afforded by reaction-induced increases in refractive index. Examples of pattern formation are spatial solitons and vortex lattices in the framework of the nonlinear Schrödinger equation.
Molecular nonlinear optics
The early studies of nonlinear optics and materials focused on the inorganic solids. With the development of nonlinear optics, molecular optical properties were investigated, forming molecular nonlinear optics. The traditional approaches used in the past to enhance nonlinearities include extending chromophore π-systems, adjusting bond length alternation, inducing intramolecular charge transfer, extending conjugation in 2D, and engineering multipolar charge distributions. Recently, many novel directions were proposed for enhanced nonlinearity and light manipulation, including twisted chromophores, combining rich density of states with bond alternation, microscopic cascading of second-order nonlinearity, etc. Due to the distinguished advantages, molecular nonlinear optics have been widely used in the biophotonics field, including bioimaging, phototherapy, biosensing, etc.
Connecting bulk properties to microscopic properties
Molecular nonlinear optics relates the optical properties of bulk matter to microscopic molecular properties. Just as the polarizability can be described as a Taylor series expansion, one can expand the induced dipole moment in powers of the electric field: μ = αE + βE^2 + γE^3 + ⋯, where μ is the induced dipole moment, α is the polarizability, β is the first hyperpolarizability, γ is the second hyperpolarizability, and so on.
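A sketch of this expansion, with made-up molecular response coefficients (the numbers are placeholders, not measured values for any molecule):

```python
def induced_dipole(E, mu0=0.0, alpha=1.0e-39, beta=5.0e-50, gamma=2.0e-61):
    """Induced dipole moment expanded in powers of the local field E.

    mu0   : permanent dipole moment (set to zero here)
    alpha : (linear) polarizability
    beta  : first hyperpolarizability  -> second-order NLO response
    gamma : second hyperpolarizability -> third-order NLO response
    """
    return mu0 + alpha * E + beta * E**2 + gamma * E**3

for E in (1e6, 1e8, 1e9):   # field strengths in V/m
    print(E, induced_dipole(E))
# At low fields the alpha term dominates; the beta and gamma terms only become
# comparable at the very high fields delivered by pulsed lasers, which is why
# molecular nonlinearities are probed with intense, focused beams.
```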
Novel Nonlinear Media
Certain molecular materials can be optimized for their optical nonlinearity at the microscopic and bulk levels. Owing to the delocalization of electrons in π bonds, these electrons respond more easily to applied optical fields and tend to produce larger linear and nonlinear optical responses than electrons in single (σ) bonds. In these systems the linear response scales with the length of the conjugated π system, while the nonlinear response scales even more rapidly.
One of the many applications of molecular nonlinear optics is its use in nonlinear bioimaging. Nonlinear materials such as multi-photon chromophores are used as biomarkers for two-photon spectroscopy, in which the attenuation of incident light intensity as it passes through the sample is written as dI/dz = -N δ I^2,
where N is the number of particles per unit volume, I is the intensity of light, and δ is the two-photon absorption cross section. The resulting signal adopts a Lorentzian lineshape with a cross-section proportional to the difference in the dipole moments of the ground and final states.
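The attenuation law above has a simple closed form, I(z) = I0 / (1 + N δ I0 z). The sketch below integrates the equation numerically with placeholder values (arbitrary consistent units, not real chromophore data) and compares against that expression.

```python
import numpy as np

# Placeholder values in arbitrary consistent units.
N, delta, I0 = 1.0e3, 1.0e-7, 1.0e4   # concentration, 2-photon cross section, input intensity
L = 1.0                               # sample thickness
z = np.linspace(0.0, L, 1001)
dz = z[1] - z[0]

# Forward-Euler integration of dI/dz = -N * delta * I**2
I_num = np.empty_like(z)
I_num[0] = I0
for i in range(len(z) - 1):
    I_num[i + 1] = I_num[i] - N * delta * I_num[i]**2 * dz

I_exact = I0 / (1.0 + N * delta * I0 * z)   # closed-form solution
print(I_num[-1], I_exact[-1])               # the two agree closely (~I0/2 here)
```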
Similar highly conjugated chromophores with strong donor-acceptor character are used because of their large difference in dipole moments, and efforts are currently being made to extend their π-conjugated systems to enhance their nonlinear optical properties.
Common second-harmonic-generating (SHG) materials
Ordered by pump wavelength:
800 nm: BBO
806 nm: lithium iodate (LiIO3)
860 nm: potassium niobate (KNbO3)
980 nm: KNbO3
1064 nm: monopotassium phosphate (KH2PO4, KDP), lithium triborate (LBO) and β-barium borate (BBO)
1300 nm: gallium selenide (GaSe)
1319 nm: KNbO3, BBO, KDP, potassium titanyl phosphate (KTP), lithium niobate (LiNbO3), LiIO3, and ammonium dihydrogen phosphate (ADP)
1550 nm: potassium titanyl phosphate (KTP), lithium niobate (LiNbO3)
See also
Born–Infeld model
Filament propagation
:Category:Nonlinear optical materials
Further reading
Encyclopedia of laser physics and technology, with content on nonlinear optics, by Rüdiger Paschotta
An Intuitive Explanation of Phase Conjugation
SNLO - Nonlinear Optics Design Software
Robert Boyd plenary presentation: Quantum Nonlinear Optics: Nonlinear Optics Meets the Quantum World SPIE Newsroom
References
Optics | Nonlinear optics | [
"Physics",
"Chemistry"
] | 5,528 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
21,957 | https://en.wikipedia.org/wiki/Nuclear%20pore%20complex | The nuclear pore complex (NPC), is a large protein complex giving rise to the nuclear pore. Nuclear pores are found in the nuclear envelope that surrounds the cell nucleus in eukaryotic cells. The nuclear envelope is studded by a great number of nuclear pores that give access to various molecules, to and from the nucleoplasm and the cytoplasm.
Small molecules can diffuse easily but other larger molecules need to be transported across.
The nuclear pore complex consists predominantly of proteins known as nucleoporins (Nups). Each human NPC is composed of about 1,000 individual protein molecules, from an evolutionarily conserved set of 35 distinct nucleoporins. In 2022 around 90% of the structure of the human NPC was elucidated in an open and a closed conformation, and published in a special issue of Science, featured on the cover. In 2024 the structure of the nuclear basket was solved, finalising the structure of NPC.
About half of the nucleoporins encompass solenoid protein domains, such as alpha solenoids or beta-propeller folds, and occasionally both as separate structural domains. Conversely, the remaining nucleoporins exhibit characteristics of "natively unfolded" or intrinsically disordered proteins, characterized by high flexibility and a lack of ordered tertiary structure. These disordered proteins, referred to as FG nucleoporins (FG-Nups), contain multiple phenylalanine–glycine repeats (FG repeats) in their amino acid sequences. FG-Nups is one of three main types of nucleoporins found in the NPC. The other two are the transmembrane Nups and the scaffold Nups. The transmembrane Nups are made up of transmembrane alpha helices and play a vital part in anchoring the NPC to the nuclear envelope. The scaffold Nups are made up of alpha solenoid and beta-propeller folds, and create the structural framework of NPCs.
The principal function of nuclear pore complexes is to facilitate selective membrane transport of various molecules across the nuclear envelope. This includes the transportation of RNA and ribosomal proteins from the nucleus to the cytoplasm, as well as proteins (such as DNA polymerase and lamins), carbohydrates, signaling molecules, and lipids moving into the nucleus. Notably, the nuclear pore complex (NPC) can actively mediate up to 1000 translocations per complex per second.
Evolutionary conserved features in sequences that code for nucleoporins regulate molecular transport through the nuclear pore. Nucleoporin-mediated transport does not entail direct energy expenditure but instead relies on concentration gradients associated with the RAN cycle (Ras-related nuclear protein cycle).
The count of nuclear pore complexes varies across cell types and different stages of the cell's life cycle, with approximately 1,000 NPCs typically found in vertebrate cells. The human nuclear pore complex (hNPC) is a substantial structure, with a molecular weight of 120 megadaltons (MDa). Each NPC comprises eight protein subunits encircling the actual pore, forming the outer ring. Additionally, these subunits project a spoke-shaped protein over the pore channel. The central region of the pore may exhibit a plug-like structure; however, its precise nature remains unknown, and it is yet undetermined whether it represents an actual plug or merely cargo transiently caught in transit.
Structure
The nuclear pore complex (NPC) is a crucial cellular structure with a diameter of approximately 120 nanometers in vertebrates. Its channel varies from 5.2 nanometers in humans to 10.7 nm in the frog Xenopus laevis, with a depth of roughly 45 nm. Additionally, mRNA, being single-stranded, has a thickness ranging from 0.5 to 1 nm. The mammalian NPC has a molecular mass of about 124 megadaltons (MDa), comprising approximately 30 distinct protein components, each in multiple copies. The mammalian NPCs contain about 800 nucleoporins each that are organized into distinct NPC subcomplexes. Conversely, the yeast Saccharomyces cerevisiae possesses a smaller mass, estimated at only 66 MDa.
Nuclear transport
The nuclear pore complex (NPC) serves as a highly regulated gateway for the transport of molecules between the nucleus and the cytoplasm. This intricate system enables the selective passage of molecules including proteins, RNA, and signaling molecules, ensuring proper cellular function and homeostasis. Small molecules such as water, ions and small proteins can diffuse through NPCs, but larger cargoes (>40 kDa) such as RNA and most proteins require the participation of soluble transport receptors.
The largest family of nuclear transport receptors is the karyopherins, also known as importins or exportins. These are a superfamily of nuclear transport receptors that facilitate the translocation of proteins, RNAs, and ribonuclear particles across the NPC in a Ran GTP hydrolase-dependent process. This family is further subdivided into the karyopherin-α and the karyopherin-β subfamilies. Other nuclear transport receptors include NTF2 and some NTF2-like proteins.
Three models have been suggested to explain the translocation mechanism:
Affinity gradients along the central plug
Brownian affinity gating
Selective phase
Import of proteins
Nuclear proteins are synthesized in the cytoplasm and need to be imported through the NPCs into the nucleus. Import can be directed by various signals, of which nuclear localization signal (NLS) are best characterized. Several NLS sequences are known, generally containing a conserved sequence with basic residues such as PKKKRKV. Any material with an NLS will be taken up by importins to the nucleus.
Importation begins with Importin-α binding to the NLS sequence of cargo proteins, forming a complex. Importin-β then attaches to Importin-α, facilitating transport towards the NPC.
As the complex reaches the NPC, it diffuses through the pore without the need for additional energy. Upon entry into nucleus, RanGTP binds to Importin-β and displaces it from the complex. Then the cellular apoptosis susceptibility protein (CAS), an exportin which in the nucleus is bound to RanGTP, displaces Importin-α from the cargo. The NLS-protein is thus free in the nucleoplasm. The Importinβ-RanGTP and Importinα-CAS-RanGTP complex diffuses back to the cytoplasm where GTPs are hydrolyzed to GDP leading to the release of Importinβ and Importinα which become available for a new NLS-protein import round.
While translocation through the NPC itself is not energy-dependent, the overall import cycle requires the hydrolysis of two GTP molecules, making it an active transport process. The import cycle is powered by the nucleo-cytoplasmic RanGTP gradient. This gradient arises from the exclusive nuclear localization of RanGEFs, proteins that exchange GDP for GTP on Ran molecules. Thus, there is an elevated RanGTP concentration in the nucleus compared to the cytoplasm.
Export of proteins
In addition to nuclear import, certain molecules and macromolecular complexes, such as ribosome subunits and messenger RNAs, require export from the nucleus to the cytoplasm. This export process mirrors the import mechanism in complexity and importance.
In a classical export scenario, proteins with a nuclear export sequence (NES) form a heterotrimeric complex with an exportin and RanGTP within the nucleus. An example of such an exportin is CRM1. This complex subsequently translocates to the cytoplasm, where GTP hydrolysis occurs, releasing the NES-containing protein. The resulting CRM1-RanGDP complex returns to the nucleus, where RanGEFs catalyze the exchange of GDP for GTP on Ran, replenishing the system's energy source. This entire process is energy-dependent and consumes one GTP molecule. Notably, the export activity mediated by CRM1 can be inhibited by compounds such as leptomycin B.
Export of RNA
Different classes of RNA follow different export pathways through the NPC. RNA export is signal-mediated, with nuclear export signals (NES) present in RNA-binding proteins, except for tRNA, which lacks an adapter. Notably, all viral RNAs and cellular RNAs (tRNA, rRNA, U snRNA, microRNA) except mRNA are dependent on RanGTP. Conserved mRNA export factors are necessary for mRNA nuclear export. The export factors are Mex67/TAP (large subunit) and Mtr2/p15 (small subunit).
In higher eukaryotes, mRNA export is believed to be splicing-dependent. Splicing recruits the TREX protein complex to spliced messages, which serves as an adapter for TAP, a low-affinity RNA-binding protein. However, there are alternative mRNA export pathways that do not rely on splicing for specialized messages such as histone mRNAs. Recent work also suggests an interplay between splicing-dependent export and one of these alternative mRNA export pathways for secretory and mitochondrial transcripts.
Assembly of the NPC
Since the NPC regulates genome access, its presence in significant quantities during cell cycle stages characterized by high transcription rates is crucial. For example, cycling mammalian and yeast cells double the amount of NPC in the nucleus between the G1 and G2 phase. Similarly, oocytes accumulate abundant NPCs in anticipation of the rapid mitotic activity during early development. Moreover, interphase cells must maintain NPC generation to sustain consistent NPC levels, as some may incur damage. Furthermore, certain cells can even increase the NPC numbers due to increased transcriptional demand.
Theories of assembly
There are several theories as to how NPCs are assembled. As the immunodepletion of certain protein complexes, such as the Nup 107–160 complex, leads to the formation of poreless nuclei, it seems likely that the Nup complexes are involved in fusing the outer membrane of the nuclear envelope with the inner and not that the fusing of the membrane begins the formation of the pore. There are several ways that this could lead to the formation of the full NPC.
One possibility is that as a protein complex it binds to the chromatin. It is then inserted into the double membrane close to the chromatin. This, in turn, leads to the fusing of that membrane. Around this protein complex others eventually bind forming the NPC. This method is possible during every phase of mitosis as the double membrane is present around the chromatin before the membrane fusion proteins complex can insert. Post mitotic cells could form a membrane first with pores being inserted into after formation.
Another model for the formation of the NPC is the production of a prepore as a starting point, as opposed to a single protein complex. This prepore would form when several Nup complexes come together and bind to the chromatin. The double membrane would then form around the prepore during mitotic reassembly. Possible prepore structures have been observed on chromatin before nuclear envelope (NE) formation using electron microscopy. During the interphase of the cell cycle the formation of the prepore would happen within the nucleus, each component being transported in through existing NPCs. These Nups would bind to an importin once formed, preventing the assembly of a prepore in the cytoplasm. Once transported into the nucleus, Ran GTP would bind to the importin and cause it to release the cargo, leaving the Nup free to form a prepore. The binding of importins has been shown to bring at least the Nup 107 and Nup 153 nucleoporins into the nucleus. NPC assembly is a very rapid process, yet defined intermediate states occur, which suggests that this assembly happens in a stepwise fashion.
Disassembly
During mitosis the NPC appears to disassemble in stages, except in lower eukaryotes such as yeast, where NPC disassembly does not occur during mitosis. Peripheral nucleoporins, such as Nup153, Nup98 and Nup214, dissociate from the NPC. The rest, which can be considered scaffold proteins, remain stable as cylindrical ring complexes within the nuclear envelope. This disassembly of the NPC peripheral groups is largely thought to be phosphorylation-driven, as several of these nucleoporins are phosphorylated during the stages of mitosis. However, the enzyme involved in the phosphorylation is unknown in vivo. In metazoans (which undergo open mitosis) the NE degrades quickly after the loss of the peripheral Nups. The reason for this may be a change in the NPC's architecture. This change may make the NPC more permeable to enzymes involved in the degradation of the NE, such as cytoplasmic tubulin, as well as allowing the entry of key mitotic regulator proteins. In organisms that undergo a semi-open mitosis, such as the filamentous fungus Aspergillus nidulans, 14 out of the 30 nucleoporins disassemble from the core scaffold structure, driven by the activation of the NIMA and Cdk1 kinases, which phosphorylate nucleoporins and widen the nuclear pores, allowing the entry of mitotic regulators.
Preservation of integrity
In fungi undergoing closed mitosis, where the nucleus remains intact, changes in the permeability barrier of the nuclear envelope are attributed to alterations within the NPC. These changes facilitate the entry of mitotic regulators into the nucleus. Studies in Aspergillus nidulans suggest that the NPC composition is affected by the mitotic kinase NIMA. NIMA potentially phosphorylates the nucleoporins Nup98 and Gle2/Rae1, leading to NPC remodeling. This remodeling allows the nuclear entry of the protein complex cdc2/cyclin B and various other proteins, including soluble tubulin. The NPC scaffold remains intact throughout closed mitosis, which appears to preserve the integrity of the nuclear envelope.
References
External links
Nuclear Pore Complex animations
Nuclear Pore Complex illustrations
3D electron microscopy structures of the NPC and constituent proteins from the EM Data Bank(EMDB)
NCDIR - National Center for the Dynamic Interactome
Cell nucleus
Membrane biology
Nuclear pore complex | Nuclear pore complex | [
"Chemistry"
] | 3,091 | [
"Membrane biology",
"Molecular biology"
] |
21,961 | https://en.wikipedia.org/wiki/Nucleon | In physics and chemistry, a nucleon is either a proton or a neutron, considered in its role as a component of an atomic nucleus. The number of nucleons in a nucleus defines the atom's mass number (nucleon number).
Until the 1960s, nucleons were thought to be elementary particles, not made up of smaller parts. Now they are understood as composite particles, made of three quarks bound together by the strong interaction. The interaction between two or more nucleons is called internucleon interaction or nuclear force, which is also ultimately caused by the strong interaction. (Before the discovery of quarks, the term "strong interaction" referred to just internucleon interactions.)
Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that describe the properties of quarks and of the strong interaction. These equations describe quantitatively how quarks can bind together into protons and neutrons (and all the other hadrons). However, when multiple nucleons are assembled into an atomic nucleus (nuclide), these fundamental equations become too difficult to solve directly (see lattice QCD). Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models, such as the nuclear shell model. These models can successfully describe nuclide properties, as for example, whether or not a particular nuclide undergoes radioactive decay.
The proton and neutron are in a scheme of categories being at once fermions, hadrons and baryons. The proton carries a positive net charge, and the neutron carries a zero net charge; the proton's mass is only about 0.13% less than the neutron's. Thus, they can be viewed as two states of the same nucleon, and together form an isospin doublet (). In isospin space, neutrons can be transformed into protons and conversely by SU(2) symmetries. These nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. According to Noether's theorem, isospin is conserved with respect to the strong interaction.
Overview
Properties
Protons and neutrons are best known in their role as nucleons, i.e., as the components of atomic nuclei, but they also exist as free particles. Free neutrons are unstable, with a half-life of around 10 minutes, but they have important applications (see neutron radiation and neutron scattering). Protons not bound to other nucleons are the nuclei of hydrogen atoms when paired with an electron, or, if not bound to anything, are ions or cosmic rays.
Both the proton and the neutron are composite particles, meaning that each is composed of smaller parts, namely three quarks each; although once thought to be so, neither is an elementary particle. A proton is composed of two up quarks and one down quark, while the neutron has one up quark and two down quarks. Quarks are held together by the strong force, or equivalently, by gluons, which mediate the strong force at the quark level.
An up quark has electric charge +2/3 e, and a down quark has charge -1/3 e, so the summed electric charges of proton and neutron are +e and 0, respectively. Thus, the neutron has a charge of 0 (zero), and therefore is electrically neutral; indeed, the term "neutron" comes from the fact that a neutron is electrically neutral.
The masses of the proton and neutron are similar: about 938.3 MeV/c2 for the proton and about 939.6 MeV/c2 for the neutron; the neutron is roughly 0.13% heavier. The similarity in mass can be explained roughly by the slight difference in the masses of the up and down quarks composing the nucleons. However, a detailed description remains an unsolved problem in particle physics.
The spin of the nucleon is 1/2, which means that they are fermions and, like electrons, are subject to the Pauli exclusion principle: no more than one nucleon, e.g. in an atomic nucleus, may occupy the same quantum state.
The isospin and spin quantum numbers of the nucleon have two states each, resulting in four combinations in total. An alpha particle is composed of four nucleons occupying all four combinations, namely, it has two protons (having opposite spin) and two neutrons (also having opposite spin), and its net nuclear spin is zero. In larger nuclei constituent nucleons, by Pauli exclusion, are compelled to have relative motion, which may also contribute to nuclear spin via the orbital quantum number. They spread out into nuclear shells analogous to electron shells known from chemistry.
Both the proton and neutron have magnetic moments, though the nucleon magnetic moments are anomalous and were unexpected when they were discovered in the 1930s. The proton's magnetic moment, symbol μp, is about 2.79 μN, whereas, if the proton were an elementary Dirac particle, it should have a magnetic moment of exactly 1 μN. Here the unit for the magnetic moments is the nuclear magneton, symbol μN, an atomic-scale unit of measure. The neutron's magnetic moment is about −1.91 μN, whereas, since the neutron lacks an electric charge, it would be expected to have no magnetic moment if it were elementary. The value of the neutron's magnetic moment is negative because the direction of the moment is opposite to the neutron's spin. The nucleon magnetic moments arise from the quark substructure of the nucleons. The proton magnetic moment is exploited for NMR / MRI scanning.
Stability
A neutron in a free state is an unstable particle, with a half-life around ten minutes. It undergoes beta decay (a type of radioactive decay) by turning into a proton while emitting an electron and an electron antineutrino. This reaction can occur because the mass of the neutron is slightly greater than that of the proton. (See the Neutron article for more discussion of neutron decay.) A proton by itself is thought to be stable, or at least its lifetime is too long to measure. This is an important discussion in particle physics (see Proton decay).
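As a simple worked example of what a roughly ten-minute half-life implies, the sketch below computes the surviving fraction of free neutrons from the usual exponential decay law; the half-life of 611 s used here is an assumed, approximate value.

```python
import math

T_HALF = 611.0   # assumed free-neutron half-life in seconds (approximate)

def surviving_fraction(t_seconds, t_half=T_HALF):
    """Fraction of an initial free-neutron population remaining after t seconds."""
    return 0.5 ** (t_seconds / t_half)

for t in (60, 611, 3600):
    print(f"after {t:>5d} s: {surviving_fraction(t):.3f}")
# After one hour only about 1.7% of an initial population of free neutrons
# remains, which is why free neutrons must be used or detected promptly.
```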
Inside a nucleus, on the other hand, combined protons and neutrons (nucleons) can be stable or unstable depending on the nuclide, or nuclear species. Inside some nuclides, a neutron can turn into a proton (producing other particles) as described above; the reverse can happen inside other nuclides, where a proton turns into a neutron (producing other particles) through beta-plus decay or electron capture. And inside still other nuclides, both protons and neutrons are stable and do not change form.
Antinucleons
Both nucleons have corresponding antiparticles: the antiproton and the antineutron, which have the same mass and opposite charge as the proton and neutron respectively, and they interact in the same way. (This is generally believed to be exactly true, due to CPT symmetry. If there is a difference, it is too small to measure in all experiments to date.) In particular, antinucleons can bind into an "antinucleus". So far, scientists have created antideuterium and antihelium-3 nuclei.
Tables of detailed properties
Nucleons
The masses of the proton and neutron are known with far greater precision in daltons (Da) than in MeV/c2 due to the way in which these units are defined. The conversion factor used is 1 Da ≈ 931.494 MeV/c2.
At least 10^35 years. See proton decay.
For free neutrons; in most common nuclei, neutrons are stable.
The masses of their antiparticles are assumed to be identical, and no experiments have refuted this to date. Current experiments show any relative difference between the masses of the proton and antiproton must be less than and the difference between the neutron and antineutron masses is on the order of .
Nucleon resonances
Nucleon resonances are excited states of nucleon particles, often corresponding to one of the quarks having a flipped spin state, or with different orbital angular momentum when the particle decays. Only resonances with a 3- or 4-star rating at the Particle Data Group (PDG) are included in this table. Due to their extraordinarily short lifetimes, many properties of these particles are still under investigation.
The symbol format is given as N(m) L2I,2J, where m is the particle's approximate mass, L is the orbital angular momentum (in the spectroscopic notation) of the nucleon–meson pair produced when it decays, and I and J are the particle's isospin and total angular momentum respectively. Since nucleons are defined as having isospin 1/2, the first number will always be 1, and the second number will always be odd. When discussing nucleon resonances, sometimes the N is omitted and the order is reversed, in the form L2I,2J(m); for example, a proton can be denoted as "N(939) S11" or "S11 (939)".
The table below lists only the base resonance; each individual entry represents 4 baryons: 2 nucleon resonance particles and their 2 antiparticles. Each resonance exists in a form with a positive electric charge, with a quark composition of uud like the proton, and a neutral form, with a quark composition of udd like the neutron, as well as the two corresponding antiparticles with the corresponding antiquark compositions. Since they contain no strange, charm, bottom, or top quarks, these particles do not possess strangeness, etc.
The table only lists the resonances with an isospin of 1/2. For resonances with isospin 3/2, see the article on Delta baryons.
† The P11(939) nucleon represents the excited state of a normal proton or neutron. Such a particle may be stable when in an atomic nucleus, e.g. in lithium-6.
Quark model classification
In the quark model with SU(2) flavour, the two nucleons are part of the ground-state doublet. The proton has quark content of uud, and the neutron, udd. In SU(3) flavour, they are part of the ground-state octet (8) of spin-1/2 baryons, known as the Eightfold Way. The other members of this octet are the hyperons: the strange isotriplet Σ+, Σ0, Σ−, the Λ, and the strange isodoublet Ξ0, Ξ−. One can extend this multiplet in SU(4) flavour (with the inclusion of the charm quark) to the ground-state 20-plet, or to SU(6) flavour (with the inclusion of the top and bottom quarks) to the ground-state 56-plet.
The article on isospin provides an explicit expression for the nucleon wave functions in terms of the quark flavour eigenstates.
Models
Although it is known that the nucleon is made from three quarks, it is not known how to solve the equations of motion for quantum chromodynamics. Thus, the study of the low-energy properties of the nucleon is performed by means of models. The only first-principles approach available is to attempt to solve the equations of QCD numerically, using lattice QCD. This requires complicated algorithms and very powerful supercomputers. However, several analytic models also exist:
Skyrmion models
The skyrmion models the nucleon as a topological soliton in a nonlinear SU(2) pion field. The topological stability of the skyrmion is interpreted as the conservation of baryon number, that is, the non-decay of the nucleon. The local topological winding number density is identified with the local baryon number density of the nucleon. With the pion isospin vector field oriented in the shape of a hedgehog space, the model is readily solvable, and is thus sometimes called the hedgehog model. The hedgehog model is able to predict low-energy parameters, such as the nucleon mass, radius and axial coupling constant, to approximately 30% of experimental values.
MIT bag model
The MIT bag model confines quarks and gluons interacting through quantum chromodynamics to a region of space determined by balancing the pressure exerted by the quarks and gluons against a hypothetical pressure exerted by the vacuum on all colored quantum fields. The simplest approximation to the model confines three non-interacting quarks to a spherical cavity, with the boundary condition that the quark vector current vanish on the boundary. The non-interacting treatment of the quarks is justified by appealing to the idea of asymptotic freedom, whereas the hard-boundary condition is justified by quark confinement.
Mathematically, the model vaguely resembles that of a radar cavity, with solutions to the Dirac equation standing in for solutions to the Maxwell equations, and the vanishing vector current boundary condition standing for the conducting metal walls of the radar cavity. If the radius of the bag is set to the radius of the nucleon, the bag model predicts a nucleon mass that is within 30% of the actual mass.
Although the basic bag model does not provide a pion-mediated interaction, it describes excellently the nucleon–nucleon forces through the 6 quark bag s-channel mechanism using the P-matrix.
Chiral bag model
The chiral bag model merges the MIT bag model and the skyrmion model. In this model, a hole is punched out of the middle of the skyrmion and replaced with a bag model. The boundary condition is provided by the requirement of continuity of the axial vector current across the bag boundary.
Very curiously, the missing part of the topological winding number (the baryon number) of the hole punched into the skyrmion is exactly made up by the non-zero vacuum expectation value (or spectral asymmetry) of the quark fields inside the bag. To date, this remarkable trade-off between topology and the spectrum of an operator does not have any grounding or explanation in the mathematical theory of Hilbert spaces and their relationship to geometry.
Several other properties of the chiral bag are notable: It provides a better fit to the low-energy nucleon properties, to within 5–10%, and these are almost completely independent of the chiral-bag radius, as long as the radius is less than the nucleon radius. This independence of radius is referred to as the Cheshire Cat principle, after the fading of Lewis Carroll's Cheshire Cat to just its smile. It is expected that a first-principles solution of the equations of QCD will demonstrate a similar duality of quark–meson descriptions.
See also
SLAC bag model
Hadrons
Electroweak interaction
Footnotes
References
Particle listings
Further reading
Hadrons
Baryons
Neutron | Nucleon | [
"Physics"
] | 3,104 | [
"Matter",
"Nucleons",
"Hadrons",
"Nuclear physics",
"Subatomic particles"
] |
22,151 | https://en.wikipedia.org/wiki/Nuclear%20reactor | A nuclear reactor is a device used to initiate and control a fission nuclear chain reaction. They are used for commercial electricity, marine propulsion, weapons production and research. When a fissile nucleus, usually uranium-235 or plutonium-239, absorbs a neutron, it splits into lighter nuclei, releasing energy, gamma radiation, and free neutrons, which can induce further fission in a self-sustaining chain reaction. Reactors stabilize this with systems of active and passive control, varying the presence of neutron absorbers and moderators in the core, maintaining criticality with delayed neutrons. Fuel efficiency is exceptionally high; low-enriched uranium has an energy density 120,000 times higher than coal.
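A back-of-the-envelope check of that figure, using assumed typical values (the burnup and coal heating value below are assumptions; real numbers vary with enrichment, burnup and coal grade):

```python
# Rough energy-density comparison of low-enriched uranium fuel and coal.
burnup_GWd_per_tU = 33.0                      # assumed LWR discharge burnup
coal_MJ_per_kg    = 24.0                      # assumed hard-coal heating value

leu_J_per_kg  = burnup_GWd_per_tU * 1e9 * 86400 / 1000.0   # GW*d per tonne -> J per kg
coal_J_per_kg = coal_MJ_per_kg * 1e6

print(f"LEU : {leu_J_per_kg:.2e} J/kg")
print(f"coal: {coal_J_per_kg:.2e} J/kg")
print(f"ratio ~ {leu_J_per_kg / coal_J_per_kg:,.0f}")     # on the order of the ~120,000x quoted
```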
Following the discovery of nuclear fission in 1938, many countries initiated military nuclear research programs. Early subcritical "atomic piles" sought to allow research on fission and neutronics. The American Manhattan Project made the vast majority of early breakthroughs. In 1942, the first artificial critical nuclear reactor, Chicago Pile-1, was built at the University of Chicago, by a team led by Enrico Fermi. From 1944, with the goal of weapons-grade plutonium production for fission bombs, the first large-scale reactors were operated at the American Hanford Site. The pressurized water reactor design, used in over 70% of current commercial reactors, was developed by the US Navy for submarine propulsion, beginning with the S1W in 1953. In 1954, nuclear grid electricity production began with the Soviet Obninsk AM-1 reactor.
Heat from nuclear fission is passed to a working fluid coolant. In commercial reactors, this drives turbines connected to electrical generator shafts. The heat can also be used for district heating, and industrial applications including desalination and hydrogen production. Some reactors are used to produce isotopes for medical and industrial use.
Reactors pose a nuclear proliferation risk as they can be configured to produce plutonium and tritium for nuclear weapons. Spent fuel can be reprocessed, reducing nuclear waste and recovering some reactor-usable MOX fuel. Reprocessing is used in Europe and Asia, but due to proliferation concerns, the United States does not engage in or encourage reprocessing.
Reactor accidents have been caused by combinations of design and operator failure. The International Nuclear Event Scale ranks events from Level 1 to Level 7 according to their severity, including the amount of radioactive material released to the environment. The 1979 Three Mile Island accident, at Level 5, and the 1986 Chernobyl disaster and 2011 Fukushima disaster, both at Level 7, all had major effects on the nuclear industry and anti-nuclear movement.
There are 417 commercial reactors, 226 research reactors, and over 200 marine propulsion reactors powering more than 160 ships in operation globally. Commercial reactors provide 9% of the global electricity supply, compared to 30% from renewables, together comprising low-carbon electricity.
The US Department of Energy classes reactors into generations, with the majority of the global fleet being Generation II reactors constructed from the 1960s to 1990s, and Generation IV reactors currently in development. Reactors can also be grouped by the choices of coolant and moderator. Almost 90% of global nuclear energy comes from pressurized water reactors and boiling water reactors, which use water as a coolant and moderator. Other designs include heavy water reactors, gas-cooled reactors, and fast breeder reactors, variously optimizing efficiency, safety, and fuel type, enrichment, and burnup. Small modular reactors are also an area of current development.
Operation
Just as conventional thermal power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear reactors convert the energy released by controlled nuclear fission into thermal energy for further conversion to mechanical or electrical forms.
Fission
When a large fissile atomic nucleus such as uranium-235, uranium-233, or plutonium-239 absorbs a neutron, it may undergo nuclear fission. The heavy nucleus splits into two or more lighter nuclei (the fission products), releasing kinetic energy, gamma radiation, and free neutrons. A portion of these neutrons may be absorbed by other fissile atoms and trigger further fission events, which release more neutrons, and so on. This is known as a nuclear chain reaction.
To control such a nuclear chain reaction, control rods containing neutron poisons and neutron moderators are able to change the portion of neutrons that will go on to cause more fission. Nuclear reactors generally have automatic and manual systems to shut the fission reaction down if monitoring or instrumentation detects unsafe conditions.
Heat generation
The reactor core generates heat in a number of ways:
The kinetic energy of fission products is converted to thermal energy when these nuclei collide with nearby atoms.
The reactor absorbs some of the gamma rays produced during fission and converts their energy into heat.
Heat is produced by the radioactive decay of fission products and materials that have been activated by neutron absorption. This decay heat source will remain for some time even after the reactor is shut down.
A kilogram of uranium-235 (U-235) converted via nuclear processes releases approximately three million times more energy than a kilogram of coal burned conventionally (7.2 × 10¹³ joules per kilogram of uranium-235 versus 2.4 × 10⁷ joules per kilogram of coal).
The fission of one kilogram of uranium-235 releases about 19 billion kilocalories, so the energy released by 1 kg of uranium-235 corresponds to that released by burning 2.7 million kg of coal.
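These two figures can be cross-checked with a few lines of arithmetic. The short Python sketch below simply takes the round energy densities quoted above; the small difference from the 2.7 million kg of coal figure reflects the slightly higher coal energy density assumed in that estimate, and all variable names are illustrative only.

# Rough cross-check of the uranium-235 versus coal energy comparison,
# using the round energy densities quoted in the text above.
energy_u235 = 7.2e13   # joules released per kilogram of U-235 fissioned
energy_coal = 2.4e7    # joules released per kilogram of coal burned

ratio = energy_u235 / energy_coal
print(f"U-235 releases roughly {ratio:,.0f} times more energy per kilogram than coal")
print(f"1 kg of U-235 is equivalent to burning about {ratio/1e6:.1f} million kg of coal")
# -> roughly 3,000,000 times; about 3.0 million kg of coal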
Cooling
A nuclear reactor coolant – usually water but sometimes a gas or a liquid metal (like liquid sodium or lead) or molten salt – is circulated past the reactor core to absorb the heat that it generates. The heat is carried away from the reactor and is then used to generate steam. Most reactor systems employ a cooling system that is physically separated from the water that will be boiled to produce pressurized steam for the turbines, like the pressurized water reactor. However, in some reactors the water for the steam turbines is boiled directly by the reactor core; for example the boiling water reactor.
Reactivity control
The rate of fission reactions within a reactor core can be adjusted by controlling the quantity of neutrons that are able to induce further fission events. Nuclear reactors typically employ several methods of neutron control to adjust the reactor's power output. Some of these methods arise naturally from the physics of radioactive decay and are simply accounted for during the reactor's operation, while others are mechanisms engineered into the reactor design for a distinct purpose.
The fastest method for adjusting levels of fission-inducing neutrons in a reactor is via movement of the control rods. Control rods are made of so-called neutron poisons and therefore absorb neutrons. When a control rod is inserted deeper into the reactor, it absorbs more neutrons than the material it displaces – often the moderator. This action results in fewer neutrons available to cause fission and reduces the reactor's power output. Conversely, extracting the control rod will result in an increase in the rate of fission events and an increase in power.
The physics of radioactive decay also affects neutron populations in a reactor. One such process is delayed neutron emission by a number of neutron-rich fission isotopes. These delayed neutrons account for about 0.65% of the total neutrons produced in fission, with the remainder (termed "prompt neutrons") released immediately upon fission. The fission products which produce delayed neutrons have half-lives for their decay by neutron emission that range from milliseconds to as long as several minutes, and so considerable time is required to determine exactly when a reactor reaches the critical point. Keeping the reactor in the zone of chain reactivity where delayed neutrons are necessary to achieve a critical mass state allows mechanical devices or human operators to control a chain reaction in "real time"; otherwise the time between achievement of criticality and nuclear meltdown as a result of an exponential power surge from the normal nuclear chain reaction would be too short to allow for intervention. This last stage, where delayed neutrons are no longer required to maintain criticality, is known as the prompt critical point. There is a scale for describing criticality in numerical form, in which bare criticality is known as zero dollars and the prompt critical point is one dollar, and other points in the process are interpolated in cents.
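The dollar scale above can be written as a simple formula: reactivity ρ = (k_eff − 1)/k_eff, expressed in units of the delayed-neutron fraction β (about 0.0065 for uranium-235-fueled thermal reactors). The small Python sketch below is purely illustrative and is not tied to any specific reactor.

# Reactivity on the dollar scale: 0 $ is bare criticality, 1 $ is prompt critical.
def reactivity_in_dollars(k_effective, beta=0.0065):
    rho = (k_effective - 1.0) / k_effective   # reactivity
    return rho / beta

for k in (1.000, 1.002, 1.0065):
    print(f"k_eff = {k:.4f} -> {reactivity_in_dollars(k):+.2f} $")
# k_eff = 1.0000 -> +0.00 $ (delayed-critical, controllable)
# k_eff = 1.0065 -> about +0.99 $ (on the verge of prompt criticality)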
In some reactors, the coolant also acts as a neutron moderator. A moderator increases the power of the reactor by causing the fast neutrons that are released from fission to lose energy and become thermal neutrons. Thermal neutrons are more likely than fast neutrons to cause fission. If the coolant is a moderator, then temperature changes can affect the density of the coolant/moderator and therefore change power output. A higher temperature coolant would be less dense, and therefore a less effective moderator.
In other reactors, the coolant acts as a poison by absorbing neutrons in the same way that the control rods do. In these reactors, power output can be increased by heating the coolant, which makes it a less dense poison. Nuclear reactors generally have automatic and manual systems to scram the reactor in an emergency shut down. These systems insert large amounts of poison (often boron in the form of boric acid) into the reactor to shut the fission reaction down if unsafe conditions are detected or anticipated.
Most types of reactors are sensitive to a process variously known as xenon poisoning, or the iodine pit. The common fission product xenon-135 produced in the fission process acts as a neutron poison that absorbs neutrons and therefore tends to shut the reactor down. Xenon-135 accumulation can be controlled by keeping power levels high enough to destroy it by neutron absorption as fast as it is produced. Fission also produces iodine-135, which in turn decays (with a half-life of 6.57 hours) to new xenon-135. When the reactor is shut down, iodine-135 continues to decay to xenon-135, making restarting the reactor more difficult for a day or two, until the xenon-135 (half-life 9.2 hours) decays into cesium-135, which is not nearly as strong a neutron poison. This temporary state is the "iodine pit." If the reactor has sufficient extra reactivity capacity, it can be restarted. As the extra xenon-135 is transmuted to xenon-136, a much weaker neutron poison, within a few hours the reactor experiences a "xenon burnoff (power) transient". Control rods must be further inserted to replace the neutron absorption of the lost xenon-135. Failure to properly follow such a procedure was a key step in the Chernobyl disaster.
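The timing of the iodine pit follows from the two half-lives above: after shutdown, production of xenon-135 from decaying iodine-135 initially outpaces the decay of xenon-135 itself, so the poison concentration rises before it falls. The sketch below is a decay-only model; it ignores neutron absorption and uses arbitrary starting inventories, so only the time scale, not the absolute numbers, is meaningful.

import math

# Decay-only model of xenon-135 after shutdown (the "iodine pit").
# I-135 (half-life 6.57 h) decays into Xe-135 (half-life 9.2 h).
LAMBDA_I = math.log(2) / 6.57    # decay constant of I-135, per hour
LAMBDA_XE = math.log(2) / 9.2    # decay constant of Xe-135, per hour

def xenon_inventory(t_hours, i0=1.0, xe0=0.2):
    """Relative Xe-135 inventory t_hours after shutdown (Bateman solution)."""
    from_iodine = (LAMBDA_I * i0 / (LAMBDA_XE - LAMBDA_I)
                   * (math.exp(-LAMBDA_I * t_hours) - math.exp(-LAMBDA_XE * t_hours)))
    return xe0 * math.exp(-LAMBDA_XE * t_hours) + from_iodine

for hours in (0, 5, 10, 20, 40, 60):
    print(f"{hours:2d} h after shutdown: relative Xe-135 = {xenon_inventory(hours):.2f}")
# With these inventories the poison peaks roughly 10 hours after shutdown
# and then decays away over a day or two, matching the restart delay described above.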
Reactors used in nuclear marine propulsion (especially nuclear submarines) often cannot be run at continuous power around the clock in the same way that land-based power reactors are normally run, and in addition often need to have a very long core life without refueling. For this reason many designs use highly enriched uranium but incorporate burnable neutron poison in the fuel rods. This allows the reactor to be constructed with an excess of fissionable material, which is nevertheless made relatively safe early in the reactor's fuel burn cycle by the presence of the neutron-absorbing material which is later replaced by normally produced long-lived neutron poisons (far longer-lived than xenon-135) which gradually accumulate over the fuel load's operating life.
Electrical power generation
The energy released in the fission process generates heat, some of which can be converted into usable energy. A common method of harnessing this thermal energy is to use it to boil water to produce pressurized steam which will then drive a steam turbine that turns an alternator and generates electricity.
Life-times
Modern nuclear power plants are typically designed for a lifetime of 60 years, while older reactors were built with a planned typical lifetime of 30–40 years, though many of those have received renovations and life extensions of 15–20 years. Some believe nuclear power plants can operate for as long as 80 years or longer with proper maintenance and management. While most components of a nuclear power plant, such as steam generators, are replaced when they reach the end of their useful lifetime, the overall lifetime of the power plant is limited by the life of components that cannot be replaced when aged by wear and neutron embrittlement, such as the reactor pressure vessel. At the end of their planned life span, plants may get an extension of the operating license for some 20 years and in the US even a "subsequent license renewal" (SLR) for an additional 20 years.
Even when a license is extended, it does not guarantee the reactor will continue to operate, particularly in the face of safety concerns or incidents. Many reactors are closed long before their license or design life expires and are decommissioned. The costs of replacements or improvements required for continued safe operation may be so high that they are not cost-effective, or the reactor may be shut down due to technical failure. Others have been shut down because the surrounding area was contaminated, as at Fukushima, Three Mile Island, Sellafield, and Chernobyl. The British branch of the French concern EDF Energy, for example, extended the operating lives of its Advanced Gas-cooled Reactors (AGR) by only between 3 and 10 years. All seven AGR plants were expected to be shut down in 2022 and to be in decommissioning by 2028. Hinkley Point B was extended from 40 to 46 years before closing, as was Hunterston B, also after 46 years.
An increasing number of reactors is reaching or exceeding their design lifetimes of 30 or 40 years. In 2014, Greenpeace warned that the lifetime extension of ageing nuclear power plants amounts to entering a new era of risk. It estimated the current European nuclear liability coverage to be, on average, too low by a factor of between 100 and 1,000 to cover the likely costs, while at the same time the likelihood of a serious accident happening in Europe continues to increase as the reactor fleet grows older.
Early reactors
The neutron was discovered in 1932 by British physicist James Chadwick. The concept of a nuclear chain reaction brought about by nuclear reactions mediated by neutrons was first realized shortly thereafter, by Hungarian scientist Leó Szilárd, in 1933. He filed a patent for his idea of a simple reactor the following year while working at the Admiralty in London, England. However, Szilárd's idea did not incorporate the idea of nuclear fission as a neutron source, since that process was not yet discovered. Szilárd's ideas for nuclear reactors using neutron-mediated nuclear chain reactions in light elements proved unworkable.
Inspiration for a new type of reactor using uranium came from the discovery by Otto Hahn, Lise Meitner, and Fritz Strassmann in 1938 that bombardment of uranium with neutrons (provided by an alpha-on-beryllium fusion reaction, a "neutron howitzer") produced a barium residue, which they reasoned was created by fission of the uranium nuclei. In their second publication on nuclear fission in February 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening the possibility of a nuclear chain reaction. Subsequent studies in early 1939 (one of them by Szilárd and Fermi), revealed that several neutrons were indeed released during fission, making available the opportunity for the nuclear chain reaction that Szilárd had envisioned six years previously.
On 2 August 1939, Albert Einstein signed a letter to President Franklin D. Roosevelt (written by Szilárd) suggesting that the discovery of uranium's fission could lead to the development of "extremely powerful bombs of a new type", giving impetus to the study of reactors and fission. Szilárd and Einstein knew each other well and had worked together years previously, but Einstein had never thought about this possibility for nuclear energy until Szilard reported it to him, at the beginning of his quest to produce the Einstein-Szilárd letter to alert the U.S. government.
Shortly after, Nazi Germany invaded Poland in 1939, starting World War II in Europe. The U.S. was not yet officially at war, but in October, when the Einstein-Szilárd letter was delivered to him, Roosevelt commented that the purpose of doing the research was to make sure "the Nazis don't blow us up." The U.S. nuclear project followed, although with some delay as there remained skepticism (some of it from Enrico Fermi) and also little action from the small number of officials in the government who were initially charged with moving the project forward.
The following year, the U.S. Government received the Frisch–Peierls memorandum from the UK, which stated that the amount of uranium needed for a chain reaction was far lower than had previously been thought. The memorandum was a product of the MAUD Committee, which was working on the UK atomic bomb project, known as Tube Alloys, later to be subsumed within the Manhattan Project.
Eventually, the first artificial nuclear reactor, Chicago Pile-1, was constructed at the University of Chicago, by a team led by Italian physicist Enrico Fermi, in late 1942. By this time, the program had been pressured for a year by U.S. entry into the war. The Chicago Pile achieved criticality on 2 December 1942 at 3:25 PM. The reactor support structure was made of wood, which supported a pile (hence the name) of graphite blocks, embedded in which was natural uranium oxide 'pseudospheres' or 'briquettes'.
Soon after the Chicago Pile, the Metallurgical Laboratory developed a number of nuclear reactors for the Manhattan Project starting in 1943. The primary purpose for the largest reactors (located at the Hanford Site in Washington), was the mass production of plutonium for nuclear weapons. Fermi and Szilard applied for a patent on reactors on 19 December 1944. Its issuance was delayed for 10 years because of wartime secrecy.
"World's first nuclear power plant" is the claim made by signs at the site of the EBR-I, which is now a museum near Arco, Idaho. Originally called "Chicago Pile-4", it was carried out under the direction of Walter Zinn for Argonne National Laboratory. This experimental LMFBR operated by the U.S. Atomic Energy Commission produced 0.8 kW in a test on 20 December 1951 and 100 kW (electrical) the following day, having a design output of 200 kW (electrical).
Besides the military uses of nuclear reactors, there were political reasons to pursue civilian use of atomic energy. U.S. President Dwight Eisenhower made his famous Atoms for Peace speech to the UN General Assembly on 8 December 1953. This diplomacy led to the dissemination of reactor technology to U.S. institutions and worldwide.
The first nuclear power plant built for civil purposes was the AM-1 Obninsk Nuclear Power Plant, launched on 27 June 1954 in the Soviet Union. It produced around 5 MW (electrical). It was built after the F-1 reactor, which was the first reactor to go critical in Europe and was also built by the Soviet Union.
After World War II, the U.S. military sought other uses for nuclear reactor technology. Research by the Army under the Army Nuclear Power Program led to the power stations at Camp Century, Greenland, and McMurdo Station, Antarctica. The Air Force Nuclear Bomber project resulted in the Molten-Salt Reactor Experiment. The U.S. Navy succeeded when they steamed the USS Nautilus (SSN-571) on nuclear power on 17 January 1955.
The first commercial nuclear power station, Calder Hall in Sellafield, England was opened in 1956 with an initial capacity of 50 MW (later 200 MW).
The first portable nuclear reactor "Alco PM-2A" was used to generate electrical power (2 MW) for Camp Century from 1960 to 1963.
Table by date
Table by country
Reactor types
Classifications
By type of nuclear reaction
All commercial power reactors are based on nuclear fission. They generally use uranium and its product plutonium as nuclear fuel, though a thorium fuel cycle is also possible. Fission reactors can be divided roughly into two classes, depending on the energy of the neutrons that sustain the fission chain reaction:
Thermal-neutron reactors use slowed or thermal neutrons to keep up the fission of their fuel. Almost all current reactors are of this type. These contain neutron moderator materials that slow neutrons until their neutron temperature is thermalized, that is, until their kinetic energy approaches the average kinetic energy of the surrounding particles. Thermal neutrons have a far higher cross section (probability) of fissioning the fissile nuclei uranium-235, plutonium-239, and plutonium-241, and a relatively lower probability of neutron capture by uranium-238 (U-238) compared to the faster neutrons that originally result from fission, allowing use of low-enriched uranium or even natural uranium fuel. The moderator is often also the coolant, usually water under high pressure to increase the boiling point. These are surrounded by a reactor vessel, instrumentation to monitor and control the reactor, radiation shielding, and a containment building.
Fast-neutron reactors use fast neutrons to cause fission in their fuel. They do not have a neutron moderator, and use less-moderating coolants. Maintaining a chain reaction requires the fuel to be more highly enriched in fissile material (about 20% or more) due to the relatively lower probability of fission versus capture by U-238. Fast reactors have the potential to produce less transuranic waste because all actinides are fissionable with fast neutrons, but they are more difficult to build and more expensive to operate. Overall, fast reactors are less common than thermal reactors in most applications. Some early power stations were fast reactors, as are some Russian naval propulsion units. Construction of prototypes is continuing (see fast breeder or generation IV reactors).
In principle, fusion power could be produced by nuclear fusion of elements such as the deuterium isotope of hydrogen. While an ongoing rich research topic since at least the 1940s, no self-sustaining fusion reactor for any purpose has ever been built.
By moderator material
Used by thermal reactors:
Graphite-moderated reactors
Mostly early reactors such as the Chicago Pile, Obninsk AM-1, Windscale piles, RBMK, Magnox, and others such as the AGR use graphite as a moderator.
Water moderated reactors
Heavy-water reactors (Used in Canada, India, Argentina, China, Pakistan, Romania and South Korea).
Light-water-moderated reactors (LWRs). Light-water reactors (the most common type of thermal reactor) use ordinary water to moderate and cool the reactors. Because the light hydrogen isotope is a slight neutron poison, these reactors need artificially enriched fuels. When at operating temperature, if the temperature of the water increases, its density drops, and fewer neutrons passing through it are slowed enough to trigger further reactions. That negative feedback stabilizes the reaction rate. Graphite and heavy-water reactors tend to be more thoroughly thermalized than light-water reactors. Due to the extra thermalization, and the absence of the light hydrogen poisoning effect, these types can use natural (unenriched) uranium fuel.
Light-element-moderated reactors.
Molten-salt reactors (MSRs) can be moderated by light elements such as lithium or beryllium, which are constituents of the coolant/fuel matrix salts; LiF, BeF2, LiCl, BeCl2, and other salts containing light elements can all produce a moderating effect.
Liquid metal cooled reactors, such as those whose coolant is a mixture of lead and bismuth, may use BeO as a moderator.
Organically moderated reactors (OMR) use biphenyl and terphenyl as moderator and coolant.
By coolant
Water cooled reactor. These constitute the great majority of operational nuclear reactors: as of 2014, 93% of the world's nuclear reactors are water cooled, providing about 95% of the world's total nuclear generation capacity.
Pressurized water reactor (PWR) Pressurized water reactors constitute the large majority of all Western nuclear power plants.
A primary characteristic of PWRs is a pressurizer, a specialized pressure vessel. Most commercial PWRs and naval reactors use pressurizers. During normal operation, a pressurizer is partially filled with water, and a steam bubble is maintained above it by heating the water with submerged heaters. During normal operation, the pressurizer is connected to the primary reactor pressure vessel (RPV) and the pressurizer "bubble" provides an expansion space for changes in water volume in the reactor. This arrangement also provides a means of pressure control for the reactor by increasing or decreasing the steam pressure in the pressurizer using the pressurizer heaters.
Pressurized heavy water reactors are a subset of pressurized water reactors, sharing the use of a pressurized, isolated heat transport loop, but using heavy water as coolant and moderator for the greater neutron economies it offers.
Boiling water reactor (BWR)
BWRs are characterized by boiling water around the fuel rods in the lower portion of a primary reactor pressure vessel. A boiling water reactor uses uranium enriched in 235U, in the form of uranium dioxide, as its fuel. The fuel is assembled into rods housed in a steel vessel that is submerged in water. The nuclear fission causes the water to boil, generating steam. This steam flows through pipes into turbines, which drive a generator to produce electricity. During normal operation, pressure is controlled by the amount of steam flowing from the reactor pressure vessel to the turbine.
Supercritical water reactor (SCWR)
SCWRs are a Generation IV reactor concept where the reactor is operated at supercritical pressures and water is heated to a supercritical fluid, which never undergoes a transition to steam yet behaves like saturated steam, to power a steam generator.
Reduced moderation water reactors (RMWR), which use more highly enriched fuel with the fuel elements set closer together to allow a faster neutron spectrum, sometimes called an epithermal neutron spectrum.
Pool-type reactor can refer to unpressurized, water-cooled open-pool reactors; these should not be confused with pool-type LMFBRs, which are sodium cooled.
Some reactors have been cooled by heavy water which also served as a moderator. Examples include:
Early CANDU reactors (later ones use heavy water moderator but light water coolant)
DIDO class research reactors
Liquid metal cooled reactor. Since water is a moderator, it cannot be used as a coolant in a fast reactor. Liquid metal coolants have included sodium, NaK, lead, lead-bismuth eutectic, and in early reactors, mercury.
Sodium-cooled fast reactor
Lead-cooled fast reactor
Gas cooled reactors are cooled by a circulating gas. In commercial nuclear power plants carbon dioxide has usually been used, for example in current British AGR nuclear power plants and formerly in a number of first generation British, French, Italian, and Japanese plants. Nitrogen and helium have also been used, helium being considered particularly suitable for high temperature designs. Use of the heat varies, depending on the reactor. Commercial nuclear power plants run the gas through a heat exchanger to make steam for a steam turbine. Some experimental designs run hot enough that the gas can directly power a gas turbine.
Molten-salt reactors (MSRs) are cooled by circulating a molten salt, typically a eutectic mixture of fluoride salts, such as FLiBe. In a typical MSR, the coolant is also used as a matrix in which the fissile material is dissolved. Other eutectic salt combinations used include ZrF4 with NaF and LiCl with BeCl2.
Organic nuclear reactors use organic fluids such as biphenyl and terphenyl as coolant rather than water.
By generation
Generation I reactor (early prototypes such as Shippingport Atomic Power Station, research reactors, non-commercial power producing reactors)
Generation II reactor (most current nuclear power plants, 1965–1996)
Generation III reactor (evolutionary improvements of existing designs, 1996–2016)
Generation III+ reactor (evolutionary development of Gen III reactors, offering improvements in safety over Gen III reactor designs, 2017–2021)
Generation IV reactor (technologies still under development; unknown start date, see below)
Generation V reactor (designs which are theoretically possible, but which are not being actively considered or researched at present).
In 2003, the French Commissariat à l'Énergie Atomique (CEA) was the first to refer to "Gen II" types in Nucleonics Week.
The first mention of "Gen III" was in 2000, in conjunction with the launch of the Generation IV International Forum (GIF) plans.
"Gen IV" was named in 2000, by the United States Department of Energy (DOE), for developing new plant types.
By phase of fuel
Solid fueled
Fluid fueled
Aqueous homogeneous reactor
Molten-salt reactor
Gas fueled (theoretical)
By shape of the core
Cubical
Cylindrical
Octagonal
Spherical
Slab
Annulus
By use
Electricity
Nuclear power plants including small modular reactors
Propulsion, see nuclear propulsion
Nuclear marine propulsion
Various proposed forms of rocket propulsion
Other uses of heat
Desalination
Heat for domestic and industrial heating
Hydrogen production for use in a hydrogen economy
Production reactors for transmutation of elements
Breeder reactors are capable of producing more fissile material than they consume during the fission chain reaction (by converting fertile U-238 to Pu-239, or Th-232 to U-233). Thus, a uranium breeder reactor, once running, can be refueled with natural or even depleted uranium, and a thorium breeder reactor can be refueled with thorium; however, an initial stock of fissile material is required.
Creating various radioactive isotopes, such as americium for use in smoke detectors, and cobalt-60, molybdenum-99 and others, used for imaging and medical treatment.
Production of materials for nuclear weapons such as weapons-grade plutonium
Providing a source of neutron radiation (for example with the pulsed Godiva device) and positron radiation (e.g. neutron activation analysis and potassium-argon dating)
Research reactor: Typically reactors used for research and training, materials testing, or the production of radioisotopes for medicine and industry. These are much smaller than power reactors or those propelling ships, and many are on university campuses. There are about 280 such reactors operating, in 56 countries. Some operate with high-enriched uranium fuel, and international efforts are underway to substitute low-enriched fuel.
Current technologies
Pressurized water reactors (PWR) [moderator: high-pressure water; coolant: high-pressure water]
These reactors use a pressure vessel to contain the nuclear fuel, control rods, moderator, and coolant. The hot radioactive water that leaves the pressure vessel is looped through a steam generator, which in turn heats a secondary (nonradioactive) loop of water to steam that can run turbines. They represent the majority (around 80%) of current reactors. This is a thermal neutron reactor design, the newest of which are the Russian VVER-1200, Japanese Advanced Pressurized Water Reactor, American AP1000, Chinese Hualong Pressurized Reactor and the Franco-German European Pressurized Reactor. All the United States Naval reactors are of this type.
Boiling water reactors (BWR) [moderator: low-pressure water; coolant: low-pressure water]
A BWR is like a PWR without the steam generator. The lower pressure of its cooling water allows it to boil inside the pressure vessel, producing the steam that runs the turbines. Unlike a PWR, there is no primary and secondary loop. The thermal efficiency of these reactors can be higher, and they can be simpler, and even potentially more stable and safe. This is a thermal-neutron reactor design, the newest of which are the Advanced Boiling Water Reactor and the Economic Simplified Boiling Water Reactor.
Pressurized Heavy Water Reactor (PHWR) [moderator: high-pressure heavy water; coolant: high-pressure heavy water]
A Canadian design (known as CANDU), very similar to PWRs but using heavy water. While heavy water is significantly more expensive than ordinary water, it has greater neutron economy (creates a higher number of thermal neutrons), allowing the reactor to operate without fuel enrichment facilities. Instead of using a single large pressure vessel as in a PWR, the fuel is contained in hundreds of pressure tubes. These reactors are fueled with natural uranium and are thermal-neutron reactor designs. PHWRs can be refueled while at full power, (online refueling) which makes them very efficient in their use of uranium (it allows for precise flux control in the core). CANDU PHWRs have been built in Canada, Argentina, China, India, Pakistan, Romania, and South Korea. India also operates a number of PHWRs, often termed 'CANDU derivatives', built after the Government of Canada halted nuclear dealings with India following the 1974 Smiling Buddha nuclear weapon test.
Reaktor Bolshoy Moschnosti Kanalniy (High Power Channel Reactor) (RBMK) (also known as a Light-Water Graphite-moderated Reactor—LWGR) [moderator: graphite; coolant: high-pressure water]
A Soviet design, RBMKs are in some respects similar to CANDU in that they can be refueled during power operation and employ a pressure tube design instead of a PWR-style pressure vessel. However, unlike CANDU they are unstable and large, making containment buildings for them expensive. A series of critical safety flaws have also been identified with the RBMK design, though some of these were corrected following the Chernobyl disaster. Their main attraction is their use of light water and unenriched uranium. As of 2024, 7 remain open, mostly due to safety improvements and help from international safety agencies such as the U.S. Department of Energy. Despite these safety improvements, RBMK reactors are still considered one of the most dangerous reactor designs in use. RBMK reactors were deployed only in the former Soviet Union.
Gas-cooled reactor (GCR) and advanced gas-cooled reactor (AGR) [moderator: graphite; coolant: carbon dioxide]
These designs have a high thermal efficiency compared with PWRs due to higher operating temperatures. There are a number of operating reactors of this design, mostly in the United Kingdom, where the concept was developed. Older designs (i.e. Magnox stations) are either shut down or will be in the near future. However, the AGRs have an anticipated life of a further 10 to 20 years. This is a thermal-neutron reactor design. Decommissioning costs can be high due to the large volume of the reactor core.
Liquid metal fast-breeder reactor (LMFBR) [moderator: none; coolant: liquid metal]
This totally unmoderated reactor design produces more fuel than it consumes. They are said to "breed" fuel, because they produce fissionable fuel during operation because of neutron capture. These reactors can function much like a PWR in terms of efficiency, and do not require much high-pressure containment, as the liquid metal does not need to be kept at high pressure, even at very high temperatures. These reactors are fast neutron, not thermal neutron designs. These reactors come in two types:
Lead-cooled
Using lead as the liquid metal provides excellent radiation shielding, and allows for operation at very high temperatures. Also, lead is (mostly) transparent to neutrons, so fewer neutrons are lost in the coolant, and the coolant does not become radioactive. Unlike sodium, lead is mostly inert, so there is less risk of explosion or accident, but such large quantities of lead may be problematic from toxicology and disposal points of view. Often a reactor of this type would use a lead-bismuth eutectic mixture. In this case, the bismuth would present some minor radiation problems, as it is not quite as transparent to neutrons, and can be transmuted to a radioactive isotope more readily than lead. The Russian Alfa class submarine uses a lead-bismuth-cooled fast reactor as its main power plant.
Sodium-cooled
Most LMFBRs are of this type. The TOPAZ, BN-350 and BN-600 in the USSR, Superphénix in France, and Fermi-I in the United States were reactors of this type. The sodium is relatively easy to obtain and work with, and it also prevents corrosion of the various reactor parts immersed in it. However, sodium explodes violently when exposed to water, so care must be taken, but such explosions would not be more violent than (for example) a leak of superheated fluid from a pressurized-water reactor. The Monju reactor in Japan suffered a sodium leak in 1995 and could not be restarted until May 2010. The EBR-I, the first reactor to have a core meltdown, in 1955, was also a sodium-cooled reactor.
Pebble-bed reactors (PBR) [moderator: graphite; coolant: helium]
These use fuel molded into ceramic balls, and then circulate gas through the balls. The result is an efficient, low-maintenance, very safe reactor with inexpensive, standardized fuel. The prototypes were the AVR and the THTR-300 in Germany, which produced up to 308 MW of electricity between 1985 and 1989 until it was shut down after experiencing a series of incidents and technical difficulties. The HTR-10 is operating in China, where the HTR-PM is being developed. The HTR-PM is expected to be the first generation IV reactor to enter operation.
Molten-salt reactors (MSR) [moderator: graphite, or none for fast spectrum MSRs; coolant: molten salt mixture]
These dissolve the fuels in fluoride or chloride salts, or use such salts for coolant. MSRs potentially have many safety features, including the absence of high pressures or highly flammable components in the core. They were initially designed for aircraft propulsion due to their high efficiency and high power density. One prototype, the Molten-Salt Reactor Experiment, was built to confirm the feasibility of the Liquid fluoride thorium reactor, a thermal spectrum reactor which would breed fissile uranium-233 fuel from thorium.
Aqueous homogeneous reactor (AHR) [moderator: high-pressure light or heavy water; coolant: high-pressure light or heavy water]
These reactors use as fuel soluble nuclear salts (usually uranium sulfate or uranium nitrate) dissolved in water and mixed with the coolant and the moderator. As of April 2006, only five AHRs were in operation.
Future and developing technologies
Advanced reactors
More than a dozen advanced reactor designs are in various stages of development. Some are evolutionary from the PWR, BWR and PHWR designs above, and some are more radical departures. The former include the advanced boiling water reactor (ABWR), two of which are now operating with others under construction, and the planned passively safe Economic Simplified Boiling Water Reactor (ESBWR) and AP1000 units (see Nuclear Power 2010 Program).
The integral fast reactor (IFR) was built, tested and evaluated during the 1980s and then retired under the Clinton administration in the 1990s due to nuclear non-proliferation policies of the administration. Recycling spent fuel is the core of its design and it therefore produces only a fraction of the waste of current reactors.
The pebble-bed reactor, a high-temperature gas-cooled reactor (HTGCR), is designed so high temperatures reduce power output by Doppler broadening of the fuel's neutron cross-section. It uses ceramic fuels so its safe operating temperatures exceed the power-reduction temperature range. Most designs are cooled by inert helium. Helium is not subject to steam explosions, resists neutron absorption leading to radioactivity, and does not dissolve contaminants that can become radioactive. Typical designs have more layers (up to 7) of passive containment than light water reactors (usually 3). A unique feature that may aid safety is that the fuel balls actually form the core's mechanism, and are replaced one by one as they age. The design of the fuel makes fuel reprocessing expensive.
The small, sealed, transportable, autonomous reactor (SSTAR) is being primarily researched and developed in the US, intended as a fast breeder reactor that is passively safe and could be remotely shut down in case the suspicion arises that it is being tampered with.
The Clean and Environmentally Safe Advanced Reactor (CAESAR) is a nuclear reactor concept that uses steam as a moderator – this design is in development.
The reduced moderation water reactor builds upon the advanced boiling water reactor (ABWR) that is presently in use. It is not a complete fast reactor, instead using mostly epithermal neutrons, which are between thermal and fast neutrons in speed.
The hydrogen-moderated self-regulating nuclear power module (HPM) is a reactor design emanating from the Los Alamos National Laboratory that uses uranium hydride as fuel.
Subcritical reactors are designed to be safer and more stable, but pose a number of engineering and economic difficulties. One example is the energy amplifier.
Thorium-based reactors – It is possible to convert Thorium-232 into U-233 in reactors specially designed for the purpose. In this way, thorium, which is four times more abundant than uranium, can be used to breed U-233 nuclear fuel. U-233 is also believed to have favourable nuclear properties as compared to traditionally used U-235, including better neutron economy and lower production of long lived transuranic waste.
Advanced heavy-water reactor (AHWR) – A proposed heavy water moderated nuclear power reactor that will be the next generation design of the PHWR type. Under development in the Bhabha Atomic Research Centre (BARC), India.
KAMINI – A unique reactor using Uranium-233 isotope for fuel. Built in India by BARC and Indira Gandhi Center for Atomic Research (IGCAR).
India is also planning to build fast breeder reactors using the thorium – Uranium-233 fuel cycle. The FBTR (Fast Breeder Test Reactor) in operation at Kalpakkam (India) uses Plutonium as a fuel and liquid sodium as a coolant.
China, which has control of the Cerro Impacto deposit, has a reactor and hopes to replace coal energy with nuclear energy.
Rolls-Royce aims to sell nuclear reactors for the production of synfuel for aircraft.
Generation IV reactors
Generation IV reactors are a set of theoretical nuclear reactor designs. These are generally not expected to be available for commercial use before 2040–2050, although the World Nuclear Association suggested that some might enter commercial operation before 2030. Current reactors in operation around the world are generally considered second- or third-generation systems, with the first-generation systems having been retired some time ago. Research into these reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals. The primary goals are to improve nuclear safety, improve proliferation resistance, minimize waste and natural resource utilization, and decrease the cost to build and run such plants.
Gas-cooled fast reactor
Lead-cooled fast reactor
Molten-salt reactor
Sodium-cooled fast reactor
Supercritical water reactor
Very-high-temperature reactor
Generation V+ reactors
Generation V reactors are designs which are theoretically possible, but which are not being actively considered or researched at present. Though some generation V reactors could potentially be built with current or near term technology, they trigger little interest for reasons of economics, practicality, or safety.
Liquid-core reactor. A closed loop liquid-core nuclear reactor, where the fissile material is molten uranium or uranium solution cooled by a working gas pumped in through holes in the base of the containment vessel.
Gas-core reactor. A closed loop version of the nuclear lightbulb rocket, where the fissile material is gaseous uranium hexafluoride contained in a fused silica vessel. A working gas (such as hydrogen) would flow around this vessel and absorb the UV light produced by the reaction. This reactor design could also function as a rocket engine, as featured in Harry Harrison's 1976 science-fiction novel Skyfall. In theory, using UF6 as a working fuel directly (rather than as a stage to one, as is done now) would mean lower processing costs, and very small reactors. In practice, running a reactor at such high power densities would probably produce unmanageable neutron flux, weakening most reactor materials, and therefore as the flux would be similar to that expected in fusion reactors, it would require similar materials to those selected by the International Fusion Materials Irradiation Facility.
Gas core EM reactor. As in the gas core reactor, but with photovoltaic arrays converting the UV light directly to electricity. This approach is similar to the experimentally demonstrated photoelectric effect that would convert the X-rays generated from aneutronic fusion into electricity: by passing the high-energy photons through an array of conducting foils, some of their energy is transferred to electrons, and the photon's energy is then captured electrostatically, similar to a capacitor. Since X-rays can pass through far greater material thickness than electrons, many hundreds or thousands of layers are needed to absorb them.
Fission fragment reactor. A fission fragment reactor is a nuclear reactor that generates electricity by decelerating an ion beam of fission byproducts instead of using nuclear reactions to generate heat. By doing so, it bypasses the Carnot cycle and can achieve efficiencies of up to 90% instead of 40–45% attainable by efficient turbine-driven thermal reactors. The fission fragment ion beam would be passed through a magnetohydrodynamic generator to produce electricity.
Hybrid nuclear fusion. Would use the neutrons emitted by fusion to fission a blanket of fertile material, like U-238 or Th-232 and transmute other reactor's spent nuclear fuel/nuclear waste into relatively more benign isotopes.
Fusion reactors
Controlled nuclear fusion could in principle be used in fusion power plants to produce power without the complexities of handling actinides, but significant scientific and technical obstacles remain. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to harness fusion power.
Nuclear fuel cycle
Thermal reactors generally depend on refined and enriched uranium. Some nuclear reactors can operate with a mixture of plutonium and uranium (see MOX). The process by which uranium ore is mined, processed, enriched, used, possibly reprocessed and disposed of is known as the nuclear fuel cycle.
Under 1% of the uranium found in nature is the easily fissionable U-235 isotope and as a result most reactor designs require enriched fuel.
Enrichment involves increasing the percentage of U-235 and is usually done by means of gaseous diffusion or gas centrifuge. The enriched result is then converted into uranium dioxide powder, which is pressed and fired into pellet form. These pellets are stacked into tubes which are then sealed and called fuel rods. Many of these fuel rods are used in each nuclear reactor.
Most BWR and PWR commercial reactors use uranium enriched to about 4% U-235, and some commercial reactors with a high neutron economy do not require the fuel to be enriched at all (that is, they can use natural uranium). According to the International Atomic Energy Agency there are at least 100 research reactors in the world fueled by highly enriched (weapons-grade/90% enrichment) uranium. Theft risk of this fuel (potentially used in the production of a nuclear weapon) has led to campaigns advocating conversion of this type of reactor to low-enrichment uranium (which poses less threat of proliferation).
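The amount of natural uranium feed needed for a given quantity of enriched fuel follows from a simple U-235 mass balance between feed, product, and depleted tails. The sketch below uses the roughly 4% product assay mentioned above; the natural-uranium assay of 0.711% and the 0.25% tails assay are typical values assumed here purely for illustration.

# U-235 mass balance for enrichment:
# feed * x_feed = product * x_product + tails * x_tails, with feed = product + tails.
def feed_required(product_kg, x_product, x_feed=0.00711, x_tails=0.0025):
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

product_kg = 1000.0      # enriched uranium wanted
x_product = 0.04         # about 4% U-235, typical LWR fuel as noted above
print(f"{feed_required(product_kg, x_product):,.0f} kg of natural uranium feed needed")
# -> roughly 8,100 kg of natural uranium per 1,000 kg of 4%-enriched fuel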
Fissile U-235 and non-fissile but fissionable and fertile U-238 are both used in the fission process. U-235 is fissionable by thermal (i.e. slow-moving) neutrons. A thermal neutron is one which is moving at about the same speed as the atoms around it. Since all atoms vibrate proportionally to their absolute temperature, a thermal neutron has the best opportunity to fission U-235 when it is moving at this same vibrational speed. On the other hand, U-238 is more likely to capture a neutron when the neutron is moving very fast, forming U-239. This U-239 atom soon decays (via neptunium-239) into plutonium-239, which is another fuel. Pu-239 is a viable fuel and must be accounted for even when a highly enriched uranium fuel is used. Plutonium fissions will dominate the U-235 fissions in some reactors, especially after the initial loading of U-235 is spent. Plutonium is fissionable with both fast and thermal neutrons, which makes it suitable for either nuclear reactors or nuclear bombs.
Most reactor designs in existence are thermal reactors and typically use water as a neutron moderator (moderator means that it slows down the neutron to a thermal speed) and as a coolant. But in a fast breeder reactor, some other kind of coolant is used which will not moderate or slow the neutrons down much. This enables fast neutrons to dominate, which can effectively be used to constantly replenish the fuel supply. By merely placing cheap unenriched uranium into such a core, the non-fissionable U-238 will be turned into Pu-239, "breeding" fuel.
In the thorium fuel cycle, thorium-232 absorbs a neutron in either a fast or thermal reactor, becoming thorium-233. The thorium-233 beta decays to protactinium-233 and then to uranium-233, which in turn is used as fuel. Hence, like uranium-238, thorium-232 is a fertile material.
Fueling of nuclear reactors
The amount of energy in the reservoir of nuclear fuel is frequently expressed in terms of "full-power days," which is the number of 24-hour periods (days) a reactor is scheduled for operation at full power output for the generation of heat energy. The number of full-power days in a reactor's operating cycle (between refueling outage times) is related to the amount of fissile uranium-235 (U-235) contained in the fuel assemblies at the beginning of the cycle. A higher percentage of U-235 in the core at the beginning of a cycle will permit the reactor to be run for a greater number of full-power days.
At the end of the operating cycle, the fuel in some of the assemblies is "spent", having spent four to six years in the reactor producing power. This spent fuel is discharged and replaced with new (fresh) fuel assemblies. Though considered "spent", these fuel assemblies still contain a large quantity of fuel. In practice it is economics that determines the lifetime of nuclear fuel in a reactor. Long before all possible fission has taken place, the reactor is unable to maintain full output power, and the utility's income falls as plant output power falls. Most nuclear plants operate at a very low profit margin due to operating overhead, mainly regulatory costs, so operating below 100% power is not economically viable for very long. The fraction of the reactor's fuel core replaced during refueling is typically one-third, but depends on how long the plant operates between refuelings. Plants typically operate on 18-month or 24-month refueling cycles. This means that one refueling, replacing only one-third of the fuel, can keep a nuclear reactor at full power for nearly two years.
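The residence time quoted above follows directly from the refueling pattern: if one-third of the core is replaced at each outage, each assembly stays in the core for three cycles. A trivial sketch, assuming the 18-month cycle mentioned above:

# Relation between batch fraction, cycle length and fuel residence time.
cycle_months = 18          # refueling interval (18- or 24-month cycles are typical)
batch_fraction = 1 / 3     # fraction of the core replaced at each refueling

cycles_in_core = 1 / batch_fraction
residence_years = cycles_in_core * cycle_months / 12
print(f"Each fuel assembly stays in the core about {residence_years:.1f} years")
# -> about 4.5 years, consistent with the "four to six years" quoted above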
The disposition and storage of this spent fuel is one of the most challenging aspects of the operation of a commercial nuclear power plant. This nuclear waste is highly radioactive and its toxicity presents a danger for thousands of years. After being discharged from the reactor, spent nuclear fuel is transferred to the on-site spent fuel pool. The spent fuel pool is a large pool of water that provides cooling and shielding of the spent nuclear fuel as well as limiting radiation exposure to on-site personnel. Once the decay heat has fallen sufficiently (after approximately five years), the fuel can be transferred from the fuel pool to dry shielded casks that can be safely stored for thousands of years. After loading into dry shielded casks, the casks are stored on-site in a specially guarded facility in impervious concrete bunkers. On-site fuel storage facilities are designed to withstand the impact of commercial airliners, with little to no damage to the spent fuel. An average on-site fuel storage facility can hold 30 years of spent fuel in a space smaller than a football field.
Not all reactors need to be shut down for refueling; for example, pebble bed reactors, RBMK reactors, molten-salt reactors, Magnox, AGR and CANDU reactors allow fuel to be shifted through the reactor while it is running. In a CANDU reactor, this also allows individual fuel elements to be situated within the reactor core that are best suited to the amount of U-235 in the fuel element.
The amount of energy extracted from nuclear fuel is called its burnup, which is expressed in terms of the heat energy produced per initial unit of fuel weight. Burnup is commonly expressed as megawatt days thermal per metric ton of initial heavy metal.
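Burnup as defined above can be estimated from the thermal power, the number of full-power days, and the initial heavy-metal load. The figures in the sketch below (a 3,000 MW-thermal core containing about 80 tonnes of uranium, run for 500 full-power days per cycle) are round assumed values, not taken from the text.

# Burnup = thermal energy produced per tonne of initial heavy metal (MWd/tHM).
thermal_power_mw = 3000.0     # reactor thermal power (assumed)
full_power_days = 500.0       # full-power days in one cycle (assumed)
heavy_metal_tonnes = 80.0     # initial uranium load of the core (assumed)

cycle_burnup = thermal_power_mw * full_power_days / heavy_metal_tonnes
print(f"Burnup added per cycle: {cycle_burnup:,.0f} MWd/tHM")
# ~18,750 MWd/tHM per cycle; fuel discharged after three such cycles
# accumulates a burnup on the order of 50,000 MWd/tHM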
Nuclear safety
Nuclear safety covers the actions taken to prevent nuclear and radiation accidents and incidents or to limit their consequences. The nuclear power industry has improved the safety and performance of reactors, and has proposed new, safer (but generally untested) reactor designs but there is no guarantee that the reactors will be designed, built and operated correctly. Mistakes do occur and the designers of reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake, despite multiple warnings by the NRG and the Japanese nuclear safety administration. According to UBS AG, the Fukushima I nuclear accidents have cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable. An interdisciplinary team from MIT has estimated that given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period.
Nuclear accidents
Serious, though rare, nuclear and radiation accidents have occurred. These include the Windscale fire (October 1957), the SL-1 accident (1961), the Three Mile Island accident (1979), Chernobyl disaster (April 1986), and the Fukushima Daiichi nuclear disaster (March 2011). Nuclear-powered submarine mishaps include the K-19 reactor accident (1961), the K-27 reactor accident (1968), and the K-431 reactor accident (1985).
Nuclear reactors have been launched into Earth orbit at least 34 times. A number of incidents were connected with the unmanned, nuclear-reactor-powered Soviet RORSAT radar satellites, notably Kosmos 954, whose reactor fuel reentered the Earth's atmosphere from orbit and was dispersed over northern Canada in January 1978.
Natural nuclear reactors
Almost two billion years ago a series of self-sustaining nuclear fission "reactors" self-assembled in the area now known as Oklo in Gabon, Africa. The conditions at that place and time allowed natural nuclear fission to occur under circumstances similar to the conditions in a constructed nuclear reactor. Fifteen fossil natural fission reactors have so far been found in three separate ore deposits at the Oklo uranium mine in Gabon. First discovered in 1972 by French physicist Francis Perrin, they are collectively known as the Oklo Fossil Reactors. Self-sustaining nuclear fission reactions took place in these reactors approximately 1.5 billion years ago, and ran for a few hundred thousand years, averaging 100 kW of power output during that time. The concept of a natural nuclear reactor was theorized as early as 1956 by Paul Kuroda at the University of Arkansas.
Such reactors can no longer form on Earth in its present geologic period. Radioactive decay of formerly more abundant uranium-235 over the time span of hundreds of millions of years has reduced the proportion of this naturally occurring fissile isotope to below the amount required to sustain a chain reaction with only plain water as a moderator.
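That depletion can be illustrated by running the two isotopes' decay backwards in time. The sketch below uses the standard half-lives (about 704 million years for U-235 and 4.47 billion years for U-238, neither stated in the text) and today's roughly 0.72% natural abundance of U-235.

# Back-calculate the natural U-235 fraction at the time of the Oklo reactors.
HALF_LIFE_U235 = 0.704e9    # years
HALF_LIFE_U238 = 4.47e9     # years

def u235_fraction(years_ago, f235_today=0.0072):
    """Atom fraction of U-235 in natural uranium `years_ago` years in the past."""
    n235 = f235_today * 2 ** (years_ago / HALF_LIFE_U235)
    n238 = (1 - f235_today) * 2 ** (years_ago / HALF_LIFE_U238)
    return n235 / (n235 + n238)

print(f"Today: {u235_fraction(0):.2%}")
print(f"Two billion years ago: {u235_fraction(2.0e9):.2%}")
# -> about 3.7% two billion years ago, comparable to modern low-enriched fuel,
# which is why ordinary groundwater could serve as an adequate moderator.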
The natural nuclear reactors formed when a uranium-rich mineral deposit became inundated with groundwater that acted as a neutron moderator, and a strong chain reaction took place. The water moderator would boil away as the reaction increased, slowing it back down again and preventing a meltdown. The fission reaction was sustained for hundreds of thousands of years, cycling on the order of hours to a few days.
These natural reactors are extensively studied by scientists interested in geologic radioactive waste disposal. They offer a case study of how radioactive isotopes migrate through the Earth's crust. This is a significant area of controversy as opponents of geologic waste disposal fear that isotopes from stored waste could end up in water supplies or be carried into the environment.
Emissions
Nuclear reactors produce tritium as part of normal operations, which is eventually released into the environment in trace quantities.
As an isotope of hydrogen, tritium (T) frequently binds to oxygen and forms T2O. This molecule is chemically identical to H2O and so is both colorless and odorless; however, the additional neutrons in the hydrogen nuclei cause the tritium to undergo beta decay with a half-life of 12.3 years. Despite being measurable, the tritium released by nuclear power plants is minimal. The United States NRC estimates that a person drinking water for one year out of a well contaminated by what they would consider to be a significant tritiated water spill would receive a radiation dose of 0.3 millirem. For comparison, this is an order of magnitude less than the 4 millirem a person receives on a round-trip flight from Washington, D.C. to Los Angeles, a consequence of less atmospheric protection against highly energetic cosmic rays at high altitudes.
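The 12.3-year half-life quoted above means that a tritium release decays away on a human timescale; the fraction remaining after t years is simply 0.5 raised to the power t/12.3. A trivial sketch:

# Fraction of an initial tritium inventory remaining after a given number of years.
def tritium_remaining(years, half_life_years=12.3):
    return 0.5 ** (years / half_life_years)

for years in (12.3, 25, 50, 100):
    print(f"after {years:>5} years: {tritium_remaining(years):.1%} remains")
# half is left after one half-life; only about 0.4% remains after a century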
The amount of strontium-90 released from nuclear power plants under normal operations is so low as to be undetectable above natural background radiation. Detectable strontium-90 in ground water and the general environment can be traced to the weapons testing that occurred during the mid-20th century (accounting for 99% of the strontium-90 in the environment) and to the Chernobyl accident (accounting for the remaining 1%).
See also
List of nuclear reactors
List of small modular reactor designs
List of United States Naval reactors
Neutron transport
Nuclear decommissioning
Nuclear power by country
Nuclear power in space
One Less Nuclear Power Plant
Radioisotope thermoelectric generator
Safety engineering
Sayonara Nuclear Power Plants
Small modular reactor
Thorium-based nuclear power
Traveling-wave reactor (TWR)
World Nuclear Industry Status Report
Nuclear microreactor
Notes
References
External links
The Database on Nuclear Power Reactors – IAEA.
Uranium Conference adds discussion of Japan accident
A Debate: Is Nuclear Power The Solution to Global Warming?
Union of Concerned Scientists, Concerns re: US nuclear reactor program
Freeview Video 'Nuclear Power Plants – What's the Problem' A Royal Institution Lecture by John Collier by the Vega Science Trust.
Nuclear Energy Institute – How it Works: Electric Power Generation.
Annotated bibliography of nuclear reactor technology from the Alsos Digital Library
Development and Practical Use of Nuclear Reactors for Space in the Soviet Union (in Japanese).
Energy conversion
Nuclear technology
Power station technology
Pressure vessels
Nuclear research reactors
Nuclear power reactor types
Neutron sources | Nuclear reactor | [
"Physics",
"Chemistry",
"Engineering"
] | 12,219 | [
"Structural engineering",
"Chemical equipment",
"Nuclear technology",
"Physical systems",
"Hydraulics",
"Nuclear physics",
"Pressure vessels"
] |
22,153 | https://en.wikipedia.org/wiki/Nuclear%20power | Nuclear power is the use of nuclear reactions to produce electricity. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions. Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium and plutonium in nuclear power plants. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators in some space probes such as Voyager 2. Reactors producing controlled fusion power have been operated since 1958 but have yet to generate net power and are not expected to be commercially available in the near future.
The first nuclear power plant was built in the 1950s. The global installed nuclear capacity grew to 100 GW in the late 1970s, and then expanded during the 1980s, reaching 300 GW by 1990. The 1979 Three Mile Island accident in the United States and the 1986 Chernobyl disaster in the Soviet Union resulted in increased regulation and public opposition to nuclear power plants. Nuclear power plants supplied 2,602 terawatt hours (TWh) of electricity in 2023, equivalent to about 9% of global electricity generation, and were the second largest low-carbon power source after hydroelectricity. There are 415 civilian fission reactors in the world, with an overall capacity of 374 GW; a further 66 are under construction and 87 are planned, with combined capacities of 72 GW and 84 GW, respectively. The United States has the largest fleet of nuclear reactors, generating almost 800 TWh of low-carbon electricity per year with an average capacity factor of 92%. The average global capacity factor is 89%. Most new reactors under construction are generation III reactors in Asia.
Nuclear power is a safe, sustainable energy source that reduces carbon emissions. Nuclear power generation causes one of the lowest levels of fatalities per unit of energy generated compared to other energy sources. "Economists estimate that each nuclear plant built could save more than 800,000 life years." Coal, petroleum, natural gas and hydroelectricity have each caused more fatalities per unit of energy due to air pollution and accidents. Nuclear power plants emit no greenhouse gases during operation and result in lower life-cycle carbon emissions than common "renewables". The radiological hazards associated with nuclear power are the primary motivations of the anti-nuclear movement, which contends that nuclear power poses threats to people and the environment, citing the potential for accidents like the Fukushima nuclear disaster in Japan in 2011, and argues that it is too expensive to deploy compared to alternative sustainable energy sources.
History
Origins
The process of nuclear fission was discovered in 1938 after over four decades of work on the science of radioactivity and the elaboration of new nuclear physics that described the components of atoms. Soon after the discovery of the fission process, it was realized that neutrons released by a fissioning nucleus could, under the right conditions, induce fissions in nearby nuclei, thus initiating a self-sustaining chain reaction. Once this was experimentally confirmed in 1939, scientists in many countries petitioned their governments for support for nuclear fission research, just on the cusp of World War II, in order to develop a nuclear weapon.
In the United States, these research efforts led to the creation of the first human-made nuclear reactor, the Chicago Pile-1 under the Stagg Field stadium at the University of Chicago, which achieved criticality on December 2, 1942. The reactor's development was part of the Manhattan Project, the Allied effort to create atomic bombs during World War II. It led to the building of larger single-purpose production reactors for the production of weapons-grade plutonium for use in the first nuclear weapons. The United States tested the first nuclear weapon in July 1945, the Trinity test, and the atomic bombings of Hiroshima and Nagasaki happened one month later.
Despite the military nature of the first nuclear devices, there was strong optimism in the 1940s and 1950s that nuclear power could provide cheap and endless energy. Electricity was generated for the first time by a nuclear reactor on December 20, 1951, at the EBR-I experimental station near Arco, Idaho, which initially produced about 100kW. In 1953, American President Dwight Eisenhower gave his "Atoms for Peace" speech at the United Nations, emphasizing the need to develop "peaceful" uses of nuclear power quickly. This was followed by the Atomic Energy Act of 1954 which allowed rapid declassification of U.S. reactor technology and encouraged development by the private sector.
First power generation
The first organization to develop practical nuclear power was the U.S. Navy, with the S1W reactor for the purpose of propelling submarines and aircraft carriers. The first nuclear-powered submarine, USS Nautilus, was put to sea in January 1954. The S1W reactor was a pressurized water reactor. This design was chosen because it was simpler, more compact, and easier to operate compared to alternative designs, and thus more suitable for use in submarines. This decision would result in the PWR being the reactor of choice also for power generation, thus having a lasting impact on the civilian electricity market in the years to come.
On June 27, 1954, the Obninsk Nuclear Power Plant in the USSR became the world's first nuclear power plant to generate electricity for a power grid, producing around 5 megawatts of electric power. The world's first commercial nuclear power station, Calder Hall at Windscale, England was connected to the national power grid on 27 August 1956. In common with a number of other generation I reactors, the plant had the dual purpose of producing electricity and plutonium-239, the latter for the nascent nuclear weapons program in Britain.
Expansion and first opposition
The total global installed nuclear capacity initially rose relatively quickly, rising from less than 1 gigawatt (GW) in 1960 to 100GW in the late 1970s. During the 1970s and 1980s rising economic costs (related to extended construction times largely due to regulatory changes and pressure-group litigation) and falling fossil fuel prices made nuclear power plants then under construction less attractive. In the 1980s in the U.S. and 1990s in Europe, the flat electric grid growth and electricity liberalization also made the addition of large new baseload energy generators economically unattractive.
The 1973 oil crisis had a significant effect on countries such as France and Japan, which had relied more heavily on oil for electric generation, prompting them to invest in nuclear power. France would construct 25 nuclear power plants over the next 15 years, and as of 2019, 71% of French electricity was generated by nuclear power, the highest percentage of any nation in the world.
Some local opposition to nuclear power emerged in the United States in the early 1960s. In the late 1960s, some members of the scientific community began to express pointed concerns. These anti-nuclear concerns related to nuclear accidents, nuclear proliferation, nuclear terrorism and radioactive waste disposal. In the early 1970s, there were large protests about a proposed nuclear power plant in Wyhl, Germany. The project was cancelled in 1975. The anti-nuclear success at Wyhl inspired opposition to nuclear power in other parts of Europe and North America.
By the mid-1970s anti-nuclear activism gained a wider appeal and influence, and nuclear power began to become an issue of major public protest. In some countries, the nuclear power conflict "reached an intensity unprecedented in the history of technology controversies". The increased public hostility to nuclear power led to a longer license procurement process, more regulations and increased requirements for safety equipment, which made new construction much more expensive. In the United States, over 120 Light Water Reactor proposals were ultimately cancelled and the construction of new reactors ground to a halt. The 1979 accident at Three Mile Island, which caused no fatalities, played a major part in the reduction in the number of new plant constructions in many countries.
Chernobyl and renaissance
During the 1980s one new nuclear reactor started up every 17 days on average. By the end of the decade, global installed nuclear capacity reached 300GW. Since the late 1980s, new capacity additions slowed significantly, with the installed nuclear capacity reaching 366GW in 2005.
The 1986 Chernobyl disaster in the USSR, involving an RBMK reactor, altered the development of nuclear power and led to a greater focus on meeting international safety and regulatory standards. It is considered the worst nuclear disaster in history both in total casualties, with 56 direct deaths, and financially, with the cost of the cleanup estimated at 18 billion rubles (US$68 billion in 2019, adjusted for inflation). The international organization to promote safety awareness and the professional development of operators in nuclear facilities, the World Association of Nuclear Operators (WANO), was created as a direct outcome of the 1986 Chernobyl accident. The Chernobyl disaster played a major part in the reduction in the number of new plant constructions in the following years. Influenced by these events, Italy voted against nuclear power in a 1987 referendum, becoming the first country to completely phase out nuclear power in 1990.
In the early 2000s, nuclear energy was expecting a nuclear renaissance, an increase in the construction of new reactors, due to concerns about carbon dioxide emissions. During this period, newer generation III reactors, such as the EPR began construction.
Fukushima accident
Prospects of a nuclear renaissance were delayed by another nuclear accident. The 2011 Fukushima Daiichi nuclear accident was caused by the Tōhoku earthquake and tsunami, one of the largest earthquakes ever recorded. The Fukushima Daiichi Nuclear Power Plant suffered three core meltdowns due to failure of the emergency cooling system for lack of electricity supply. This resulted in the most serious nuclear accident since the Chernobyl disaster.
The accident prompted a re-examination of nuclear safety and nuclear energy policy in many countries. Germany approved plans to close all its reactors by 2022, and many other countries reviewed their nuclear power programs. Following the disaster, Japan shut down all of its nuclear power reactors, some of them permanently, and in 2015 began a gradual process to restart the remaining 40 reactors, following safety checks and based on revised criteria for operations and public approval.
In 2022, the Japanese government, under the leadership of Prime Minister Fumio Kishida, declared that 10 more nuclear power plants, idled since the 2011 disaster, were to be reopened. Kishida is also pushing for research into and construction of new, safer nuclear plants to safeguard Japanese consumers from fluctuating fossil fuel prices and to reduce Japan's greenhouse gas emissions. Kishida intends to have Japan become a significant exporter of nuclear energy and technology to developing countries around the world.
Current prospects
By 2015, the IAEA's outlook for nuclear energy had become more promising, recognizing the importance of low-carbon generation for mitigating climate change. At that time, the global trend was for new nuclear power stations coming online to be balanced by the number of old plants being retired. In 2016, the U.S. Energy Information Administration projected for its "base case" that world nuclear power generation would increase from 2,344 terawatt hours (TWh) in 2012 to 4,500 TWh in 2040. Most of the predicted increase was expected to be in Asia. As of 2018, there were over 150 nuclear reactors planned, including 50 under construction. In January 2019, China had 45 reactors in operation, 13 under construction, and planned to build 43 more, which would make it the world's largest generator of nuclear electricity. As of 2021, 17 reactors were reported to be under construction. China built significantly fewer reactors than originally planned. Its share of electricity from nuclear power was 5% in 2019 and observers have cautioned that, along with the risks, the changing economics of energy generation may cause new nuclear energy plants to "no longer make sense in a world that is leaning toward cheaper, more reliable renewable energy".
In October 2021, the Japanese cabinet approved the new Plan for Electricity Generation to 2030 prepared by the Agency for Natural Resources and Energy (ANRE) and an advisory committee, following public consultation. The nuclear target for 2030 requires the restart of another ten reactors. Prime Minister Fumio Kishida in July 2022 announced that the country should consider building advanced reactors and extending operating licences beyond 60 years.
As of 2022, with world oil and gas prices rising and Germany restarting its coal plants to deal with the loss of Russian gas, many other countries have announced ambitious plans to reinvigorate ageing nuclear generating capacity with new investments. French President Emmanuel Macron announced his intention to build six new reactors in coming decades, placing nuclear at the heart of France's drive for carbon neutrality by 2050. Meanwhile, in the United States, the Department of Energy, in collaboration with commercial entities TerraPower and X-energy, is planning to build two different advanced nuclear reactors by 2027, with further plans for nuclear implementation in its long-term green energy and energy security goals.
Power plants
Nuclear power plants are thermal power stations that generate electricity by harnessing the thermal energy released from nuclear fission. A fission nuclear power plant is generally composed of: a nuclear reactor, in which the nuclear reactions generating heat take place; a cooling system, which removes the heat from inside the reactor; a steam turbine, which transforms the heat into mechanical energy; an electric generator, which transforms the mechanical energy into electrical energy.
When a neutron hits the nucleus of a uranium-235 or plutonium atom, it can split the nucleus into two smaller nuclei, which is a nuclear fission reaction. The reaction releases energy and neutrons. The released neutrons can hit other uranium or plutonium nuclei, causing new fission reactions, which release more energy and more neutrons. This is called a chain reaction. In most commercial reactors, the reaction rate is contained by control rods that absorb excess neutrons. The controllability of nuclear reactors depends on the fact that a small fraction of neutrons resulting from fission are delayed. The time delay between the fission and the release of the neutrons slows changes in reaction rates and gives time for moving the control rods to adjust the reaction rate.
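To give a feel for why this control mechanism works, the following is a deliberately simplified Python sketch (not a reactor physics model, and not taken from any cited source) of a neutron population evolving generation by generation under an effective multiplication factor k: the population grows when k > 1, holds steady when k = 1, and dies away when k < 1, which is the quantity control rods adjust.

```python
def neutron_population(k: float, n0: float = 1.0, generations: int = 10) -> list[float]:
    """Neutron population per fission generation for an effective multiplication
    factor k. Each generation multiplies the population by k (highly simplified)."""
    pops = [n0]
    for _ in range(generations):
        pops.append(pops[-1] * k)
    return pops

if __name__ == "__main__":
    for k in (0.98, 1.00, 1.02):  # subcritical, critical, supercritical
        final = neutron_population(k)[-1]
        print(f"k = {k:.2f}: relative population after 10 generations = {final:.3f}")
```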
Fuel cycle
The life cycle of nuclear fuel starts with uranium mining. The uranium ore is then converted into a compact ore concentrate form, known as yellowcake (U3O8), to facilitate transport. Fission reactors generally need uranium-235, a fissile isotope of uranium. The concentration of uranium-235 in natural uranium is low (about 0.7%). Some reactors can use this natural uranium as fuel, depending on their neutron economy. These reactors generally have graphite or heavy water moderators. For light water reactors, the most common type of reactor, this concentration is too low, and it must be increased by a process called uranium enrichment. In civilian light water reactors, uranium is typically enriched to 3.5–5% uranium-235. The uranium is then generally converted into uranium oxide (UO2), a ceramic, that is then compressively sintered into fuel pellets, a stack of which forms fuel rods of the proper composition and geometry for the particular reactor.
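To make the enrichment step concrete, the sketch below estimates the natural-uranium feed and separative work (SWU) needed to produce 1 kg of low-enriched uranium. The value function V(x) = (2x - 1) * ln(x / (1 - x)) and the mass balance are the standard textbook formulas; the assumed assays (0.711% feed, 0.25% tails, 4.5% product) are illustrative choices, not figures from this article.

```python
import math

def value_fn(x: float) -> float:
    """Separative-work value function V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def enrichment_requirements(product_kg: float, xp: float, xf: float, xt: float):
    """Natural-uranium feed and separative work (SWU) for a given product mass.
    xp, xf, xt are the U-235 mass fractions of product, feed, and tails."""
    feed_kg = product_kg * (xp - xt) / (xf - xt)   # mass balance on U-235
    tails_kg = feed_kg - product_kg
    swu = (product_kg * value_fn(xp)
           + tails_kg * value_fn(xt)
           - feed_kg * value_fn(xf))
    return feed_kg, swu

if __name__ == "__main__":
    # Illustrative assays: 0.711% natural feed, 0.25% tails, 4.5% product
    feed, swu = enrichment_requirements(1.0, xp=0.045, xf=0.00711, xt=0.0025)
    print(f"feed: {feed:.1f} kg natural U and {swu:.1f} SWU per kg of product")
```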
After some time in the reactor, the fuel will have reduced fissile material and increased fission products, until its use becomes impractical. At this point, the spent fuel will be moved to a spent fuel pool which provides cooling for the thermal heat and shielding for ionizing radiation. After several months or years, the spent fuel is radioactively and thermally cool enough to be moved to dry storage casks or reprocessed.
Uranium resources
Uranium is a fairly common element in the Earth's crust: it is approximately as common as tin or germanium, and is about 40 times more common than silver. Uranium is present in trace concentrations in most rocks, dirt, and ocean water, but is generally economically extracted only where it is present in relatively high concentrations. Uranium mining can be underground, open-pit, or in-situ leach mining. An increasing number of the highest output mines are remote underground operations, such as McArthur River uranium mine, in Canada, which by itself accounts for 13% of global production. As of 2011 the world's known resources of uranium, economically recoverable at the arbitrary price ceiling of US$130/kg, were enough to last for between 70 and 100 years. In 2007, the OECD estimated 670 years of economically recoverable uranium in total conventional resources and phosphate ores assuming the then-current use rate.
Light water reactors make relatively inefficient use of nuclear fuel, mostly using only the very rare uranium-235 isotope. Nuclear reprocessing can make this waste reusable, and newer reactors also achieve a more efficient use of the available resources than older ones. With a pure fast reactor fuel cycle with a burn-up of all the uranium and actinides (which presently make up the most hazardous substances in nuclear waste), there is an estimated 160,000 years' worth of uranium in total conventional resources and phosphate ore at the price of 60–100 US$/kg. However, reprocessing is expensive, possibly dangerous and can be used to manufacture nuclear weapons. One analysis found that uranium prices could increase by two orders of magnitude between 2035 and 2100 and that there could be a shortage near the end of the century. A 2017 study by researchers from MIT and WHOI found that "at the current consumption rate, global conventional reserves of terrestrial uranium (approximately 7.6 million tonnes) could be depleted in a little over a century". Limited uranium-235 supply may inhibit substantial expansion with the current nuclear technology. While various ways to reduce dependence on such resources are being explored, new nuclear technologies are considered unlikely to be available in time for climate change mitigation or to compete with renewable alternatives, in addition to being more expensive and requiring costly research and development. A study found it uncertain whether identified resources can be developed quickly enough to provide an uninterrupted fuel supply to expanded nuclear facilities, and various forms of mining may be challenged by ecological barriers, costs, and land requirements. Researchers also report a considerable import dependence of nuclear energy.
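As a back-of-the-envelope check on the "little over a century" figure quoted above, the sketch below simply divides the cited 7.6 million tonnes of conventional terrestrial uranium by an assumed consumption rate of roughly 60,000–67,000 tonnes per year; the consumption figure is an outside assumption, not a number taken from this article.

```python
def years_of_supply(reserves_tonnes: float, consumption_tonnes_per_year: float) -> float:
    """Years a fixed reserve lasts at a constant consumption rate."""
    return reserves_tonnes / consumption_tonnes_per_year

if __name__ == "__main__":
    reserves = 7.6e6  # tonnes of conventional terrestrial uranium cited above
    for rate in (60_000, 67_000):  # assumed annual consumption, tonnes/year
        print(f"at {rate:,} t/yr: about {years_of_supply(reserves, rate):.0f} years")
```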
Unconventional uranium resources also exist. Uranium is naturally present in seawater at a concentration of about 3 micrograms per liter, with 4.4 billion tons of uranium considered present in seawater at any time. In 2014 it was suggested that it would be economically competitive to produce nuclear fuel from seawater if the process was implemented at large scale. Like fossil fuels, over geological timescales, uranium extracted on an industrial scale from seawater would be replenished by both river erosion of rocks and the natural process of uranium dissolved from the surface area of the ocean floor, both of which maintain the solubility equilibria of seawater concentration at a stable level. Some commentators have argued that this strengthens the case for nuclear power to be considered a renewable energy.
Waste
The normal operation of nuclear power plants and facilities produce radioactive waste, or nuclear waste. This type of waste is also produced during plant decommissioning. There are two broad categories of nuclear waste: low-level waste and high-level waste. The first has low radioactivity and includes contaminated items such as clothing, which poses limited threat. High-level waste is mainly the spent fuel from nuclear reactors, which is very radioactive and must be cooled and then safely disposed of or reprocessed.
High-level waste
The most important waste stream from nuclear power reactors is spent nuclear fuel, which is considered high-level waste. For Light Water Reactors (LWRs), spent fuel is typically composed of 95% uranium, 4% fission products, and about 1% transuranic actinides (mostly plutonium, neptunium and americium). The fission products are responsible for the bulk of the short-term radioactivity, whereas the plutonium and other transuranics are responsible for the bulk of the long-term radioactivity.
High-level waste (HLW) must be stored isolated from the biosphere with sufficient shielding so as to limit radiation exposure. After being removed from the reactors, used fuel bundles are stored for six to ten years in spent fuel pools, which provide cooling and shielding against radiation. After that, the fuel is cool enough that it can be safely transferred to dry cask storage. The radioactivity decreases exponentially with time, such that it will have decreased by 99.5% after 100 years. The more intensely radioactive short-lived fission products (SLFPs) decay into stable elements in approximately 300 years, and after about 100,000 years, the spent fuel becomes less radioactive than natural uranium ore.
Commonly suggested methods to isolate long-lived fission product (LLFP) waste from the biosphere include separation and transmutation, synroc treatments, or deep geological storage.
Thermal-neutron reactors, which presently constitute the majority of the world fleet, cannot burn up the reactor grade plutonium that is generated during the reactor operation. This limits the life of nuclear fuel to a few years. In some countries, such as the United States, spent fuel is classified in its entirety as a nuclear waste. In other countries, such as France, it is largely reprocessed to produce a partially recycled fuel, known as mixed oxide fuel or MOX. For spent fuel that does not undergo reprocessing, the most concerning isotopes are the medium-lived transuranic elements, which are led by reactor-grade plutonium (half-life 24,000 years). Some proposed reactor designs, such as the integral fast reactor and molten salt reactors, can use as fuel the plutonium and other actinides in spent fuel from light water reactors, thanks to their fast fission spectrum. This offers a potentially more attractive alternative to deep geological disposal.
The thorium fuel cycle results in similar fission products, though creates a much smaller proportion of transuranic elements from neutron capture events within a reactor. Spent thorium fuel, although more difficult to handle than spent uranium fuel, may present somewhat lower proliferation risks.
Low-level waste
The nuclear industry also produces a large volume of low-level waste, with low radioactivity, in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. Low-level waste can be stored on-site until radiation levels are low enough to be disposed of as ordinary waste, or it can be sent to a low-level waste disposal site.
Waste relative to other types
In countries with nuclear power, radioactive wastes account for less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods. Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants, in particular, produce large amounts of toxic and mildly radioactive ash resulting from the concentration of naturally occurring radioactive materials in coal. A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent from radiation from coal plants is 100 times that from the operation of nuclear plants. Although coal ash is much less radioactive than spent nuclear fuel by weight, coal ash is produced in much higher quantities per unit of energy generated. It is also released directly into the environment as fly ash, whereas nuclear plants use shielding to protect the environment from radioactive materials.
Nuclear waste volume is small compared to the energy produced. For example, at Yankee Rowe Nuclear Power Station, which generated 44 billion kilowatt hours of electricity when in service, its complete spent fuel inventory is contained within sixteen casks. It is estimated that to produce a lifetime supply of energy for a person at a western standard of living (approximately 3GWh) would require on the order of the volume of a soda can of low enriched uranium, resulting in a similar volume of spent fuel generated.
Waste disposal
Following interim storage in a spent fuel pool, the bundles of used fuel rod assemblies of a typical nuclear power station are often stored on site in dry cask storage vessels. Presently, waste is mainly stored at individual reactor sites and there are over 430 locations around the world where radioactive material continues to accumulate.
Disposal of nuclear waste is often considered the most politically divisive aspect in the lifecycle of a nuclear power facility. The lack of movement of nuclear waste in the 2 billion year old natural nuclear fission reactors in Oklo, Gabon is cited as "a source of essential information today." Experts suggest that centralized underground repositories which are well-managed, guarded, and monitored, would be a vast improvement. There is an "international consensus on the advisability of storing nuclear waste in deep geological repositories". With the advent of new technologies, other methods including horizontal drillhole disposal into geologically inactive areas have been proposed.
There are no commercial scale purpose built underground high-level waste repositories in operation. However, in Finland the Onkalo spent nuclear fuel repository of the Olkiluoto Nuclear Power Plant was under construction as of 2015.
Reprocessing
Most thermal-neutron reactors run on a once-through nuclear fuel cycle, mainly due to the low price of fresh uranium. However, many reactors are also fueled with recycled fissionable materials that remain in spent nuclear fuel. The most common fissionable material that is recycled is the reactor-grade plutonium (RGPu) that is extracted from spent fuel. It is mixed with uranium oxide and fabricated into mixed-oxide or MOX fuel. Because thermal LWRs remain the most common reactor worldwide, this type of recycling is the most common. It is considered to increase the sustainability of the nuclear fuel cycle, reduce the attractiveness of spent fuel to theft, and lower the volume of high level nuclear waste. Spent MOX fuel cannot generally be recycled for use in thermal-neutron reactors. This issue does not affect fast-neutron reactors, which are therefore preferred in order to achieve the full energy potential of the original uranium.
The main constituent of spent fuel from LWRs is slightly enriched uranium. This can be recycled into reprocessed uranium (RepU), which can be used in a fast reactor, used directly as fuel in CANDU reactors, or re-enriched for another cycle through an LWR. Re-enriching of reprocessed uranium is common in France and Russia. Reprocessed uranium is also safer in terms of nuclear proliferation potential.
Reprocessing has the potential to recover up to 95% of the uranium and plutonium fuel in spent nuclear fuel, as well as reduce long-term radioactivity within the remaining waste. However, reprocessing has been politically controversial because of the potential for nuclear proliferation and varied perceptions of increasing the vulnerability to nuclear terrorism. Reprocessing also leads to higher fuel costs compared to the once-through fuel cycle. While reprocessing reduces the volume of high-level waste, it does not reduce the fission products that are the primary causes of residual heat generation and radioactivity for the first few centuries outside the reactor. Thus, reprocessed waste still requires an almost identical treatment for the first few hundred years.
Reprocessing of civilian fuel from power reactors is currently done in France, the United Kingdom, Russia, Japan, and India. In the United States, spent nuclear fuel is currently not reprocessed. The La Hague reprocessing facility in France has operated commercially since 1976 and is responsible for half the world's reprocessing as of 2010. It produces MOX fuel from spent fuel derived from several countries. More than 32,000 tonnes of spent fuel had been reprocessed as of 2015, with the majority from France, 17% from Germany, and 9% from Japan.
Breeding
Breeding is the process of converting non-fissile material into fissile material that can be used as nuclear fuel. The non-fissile material that can be used for this process is called fertile material, and constitute the vast majority of current nuclear waste. This breeding process occurs naturally in breeder reactors. As opposed to light water thermal-neutron reactors, which use uranium-235 (0.7% of all natural uranium), fast-neutron breeder reactors use uranium-238 (99.3% of all natural uranium) or thorium. A number of fuel cycles and breeder reactor combinations are considered to be sustainable or renewable sources of energy. In 2006 it was estimated that with seawater extraction, there was likely five billion years' worth of uranium resources for use in breeder reactors.
Breeder technology has been used in several reactors, but as of 2006, the high cost of reprocessing fuel safely requires uranium prices of more than US$200/kg before becoming justified economically. Breeder reactors are however being developed for their potential to burn all of the actinides (the most active and dangerous components) in the present inventory of nuclear waste, while also producing power and creating additional quantities of fuel for more reactors via the breeding process. As of 2017, there are two breeders producing commercial power, BN-600 reactor and the BN-800 reactor, both in Russia. The Phénix breeder reactor in France was powered down in 2009 after 36 years of operation. Both China and India are building breeder reactors. The Indian 500 MWe Prototype Fast Breeder Reactor is in the commissioning phase, with plans to build more.
Another alternative to fast-neutron breeders are thermal-neutron breeder reactors that use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle. Thorium is about 3.5 times more common than uranium in the Earth's crust, and has different geographic characteristics. India's three-stage nuclear power programme features the use of a thorium fuel cycle in the third stage, as it has abundant thorium reserves but little uranium.
Decommissioning
Nuclear decommissioning is the process of dismantling a nuclear facility to the point that it no longer requires measures for radiation protection, returning the facility and its parts to a safe enough level to be entrusted for other uses. Due to the presence of radioactive materials, nuclear decommissioning presents technical and economic challenges. The costs of decommissioning are generally spread over the lifetime of a facility and saved in a decommissioning fund.
Production
Civilian nuclear power supplied 2,602 terawatt hours (TWh) of electricity in 2023, equivalent to about 9% of global electricity generation, and was the second largest low-carbon power source after hydroelectricity. Nuclear power's contribution to global energy production was about 4% in 2023. This is a little more than wind power, which provided 3.5% of global energy in 2023. Nuclear power's share of global electricity production has fallen from 16.5% in 1997, in large part because the economics of nuclear power have become more difficult.
There are 415 civilian fission reactors in the world, with a combined electrical capacity of 374 gigawatts (GW). There are also 66 nuclear power reactors under construction and 87 reactors planned, with a combined capacity of 72 GW and 84 GW, respectively. The United States has the largest fleet of nuclear reactors, generating over 800 TWh per year with an average capacity factor of 92%. Most reactors under construction are generation III reactors in Asia.
Regional differences in the use of nuclear power are large. The United States produces the most nuclear energy in the world, with nuclear power providing 19% of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors: 65% in 2023. In the European Union, nuclear power provides 22% of the electricity as of 2022.
Nuclear power is the single largest low-carbon electricity source in the United States, and accounts for about half of the European Union's low-carbon electricity. Nuclear energy policy differs among European Union countries, and some, such as Austria, Estonia, Ireland and Italy, have no active nuclear power stations.
In addition, there were approximately 140 naval vessels using nuclear propulsion in operation, powered by about 180 reactors. These include military and some civilian ships, such as nuclear-powered icebreakers.
International research is continuing into additional uses of process heat such as hydrogen production (in support of a hydrogen economy), for desalinating sea water, and for use in district heating systems.
Economics
The economics of new nuclear power plants is a controversial subject and multi-billion-dollar investments depend on the choice of energy sources. Nuclear power plants typically have high capital costs for building the plant. For this reason, comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear plants. Fuel costs account for about 30 percent of the operating costs, while prices are subject to the market.
The high cost of construction is one of the biggest challenges for nuclear power plants. A new 1,100 MW plant is estimated to cost between US$6 billion and US$9 billion. Nuclear power cost trends show large disparity by nation, design, build rate and the establishment of familiarity in expertise. The only two nations for which data is available that saw cost decreases in the 2000s were India and South Korea.
Analysis of the economics of nuclear power must also take into account who bears the risks of future uncertainties. As of 2010, all operating nuclear power plants have been developed by state-owned or regulated electric utility monopolies. Many countries have since liberalized the electricity market where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants.
The levelized cost of electricity (LCOE) from a new nuclear power plant is estimated to be US$69/MWh, according to an analysis by the International Energy Agency and the OECD Nuclear Energy Agency. This represents the median cost estimate for an nth-of-a-kind nuclear power plant to be completed in 2025, at a discount rate of 7%. Nuclear power was found to be the least-cost option among dispatchable technologies. Variable renewables can generate cheaper electricity: the median cost of onshore wind power was estimated to be US$50/MWh, and utility-scale solar power US$56/MWh. At the assumed CO2 emission cost of US$30/ton, power from coal (US$88/MWh) and gas (US$71/MWh) is more expensive than low-carbon technologies. Electricity from long-term operation of nuclear power plants by lifetime extension was found to be the least-cost option, at US$32/MWh.
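The LCOE figures above come from discounted cash-flow arithmetic: the levelized cost is the ratio of discounted lifetime costs to discounted lifetime electricity output. The Python sketch below illustrates the calculation with made-up inputs, not the IEA/NEA study's actual data.

```python
def lcoe(capital_cost: float, annual_cost: float, annual_mwh: float,
         lifetime_years: int, discount_rate: float) -> float:
    """Levelized cost of electricity: discounted lifetime costs divided by
    discounted lifetime generation (capital assumed spent in year 0)."""
    disc_costs = capital_cost
    disc_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        disc_costs += annual_cost / factor
        disc_energy += annual_mwh / factor
    return disc_costs / disc_energy

if __name__ == "__main__":
    # Illustrative inputs only: a 1,100 MW plant at 92% capacity factor,
    # 60-year life, 7% discount rate, US$6e9 capital, US$250m/yr operating cost
    annual_mwh = 1_100 * 8_760 * 0.92
    cost = lcoe(capital_cost=6e9, annual_cost=250e6,
                annual_mwh=annual_mwh, lifetime_years=60, discount_rate=0.07)
    print(f"illustrative LCOE: approx {cost:.0f} USD/MWh")
```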
Measures to mitigate global warming, such as a carbon tax or carbon emissions trading, may favor the economics of nuclear power. Extreme weather events, including events made more severe by climate change, are decreasing all energy source reliability including nuclear energy by a small degree, depending on location siting.
New small modular reactors, such as those developed by NuScale Power, are aimed at reducing the investment costs for new construction by making the reactors smaller and modular, so that they can be built in a factory.
Certain designs had considerable early positive economics, such as the CANDU, which realized a much higher capacity factor and reliability when compared to generation II light water reactors up to the 1990s.
Nuclear power plants, though capable of some grid-load following, are typically run as much as possible to keep the cost of the generated electrical energy as low as possible, supplying mostly base-load electricity. Due to the on-line refueling reactor design, PHWRs (of which the CANDU design is a part) continue to hold many world record positions for longest continual electricity generation, often over 800 days. The specific record as of 2019 is held by a PHWR at Kaiga Atomic Power Station, generating electricity continuously for 962 days.
Costs not considered in LCOE calculations include funds for research and development, and disasters (the Fukushima disaster is estimated to cost taxpayers ≈$187 billion). In some cases, governments were found to force "consumers to pay upfront for potential cost overruns", to subsidize uneconomic nuclear energy, or to be required to do so. Nuclear operators are liable to pay for waste management in the European Union. In the U.S., Congress reportedly decided 40 years ago that the nation, and not private companies, would be responsible for storing radioactive waste, with taxpayers paying the costs. The World Nuclear Waste Report 2019 found that "even in countries in which the polluter-pays-principle is a legal requirement, it is applied incompletely" and notes the case of the German Asse II deep geological disposal facility, where the retrieval of large amounts of waste has to be paid for by taxpayers. Similarly, other forms of energy, including fossil fuels and renewables, have a portion of their costs covered by governments.
Use in space
The most common use of nuclear power in space is the use of radioisotope thermoelectric generators, which use radioactive decay to generate power. These power generators are relatively small scale (few kW), and they are mostly used to power space missions and experiments for long periods where solar power is not available in sufficient quantity, such as in the Voyager 2 space probe. A few space vehicles have been launched using nuclear reactors: 34 reactors belong to the Soviet RORSAT series and one was the American SNAP-10A.
Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass.
Safety
Nuclear power plants have three unique characteristics that affect their safety, as compared to other power plants. Firstly, intensely radioactive materials are present in a nuclear reactor. Their release to the environment could be hazardous. Secondly, the fission products, which make up most of the intensely radioactive substances in the reactor, continue to generate a significant amount of decay heat even after the fission chain reaction has stopped. If the heat cannot be removed from the reactor, the fuel rods may overheat and release radioactive materials. Thirdly, a criticality accident (a rapid increase of the reactor power) is possible in certain reactor designs if the chain reaction cannot be controlled. These three characteristics have to be taken into account when designing nuclear reactors.
All modern reactors are designed so that an uncontrolled increase of the reactor power is prevented by natural feedback mechanisms, a concept known as negative void coefficient of reactivity. If the temperature or the amount of steam in the reactor increases, the fission rate inherently decreases. The chain reaction can also be manually stopped by inserting control rods into the reactor core. Emergency core cooling systems (ECCS) can remove the decay heat from the reactor if normal cooling systems fail. If the ECCS fails, multiple physical barriers limit the release of radioactive materials to the environment even in the case of an accident. The last physical barrier is the large containment building.
With a death rate of 0.03 per TWh, nuclear power is the second safest energy source per unit of energy generated, after solar power, in terms of mortality when the historical track-record is considered. Energy produced by coal, petroleum, natural gas and hydropower has caused more deaths per unit of energy generated due to air pollution and energy accidents. This is found when comparing the immediate deaths from other energy sources to both the immediate and the latent, or predicted, indirect cancer deaths from nuclear energy accidents. When the direct and indirect fatalities (including fatalities resulting from the mining and air pollution) from nuclear power and fossil fuels are compared, the use of nuclear power has been calculated to have prevented about 1.84 million deaths from air pollution between 1971 and 2009, by reducing the proportion of energy that would otherwise have been generated by fossil fuels. Following the 2011 Fukushima nuclear disaster, it has been estimated that if Japan had never adopted nuclear power, accidents and pollution from coal or gas plants would have caused more lost years of life.
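As a rough sense of scale (an illustration, not a published estimate), multiplying the 0.03 deaths per TWh rate by the 2,602 TWh generated in 2023, a figure quoted earlier in this article, gives the fatalities implied by that historical rate for one year of output:

```python
def expected_deaths(deaths_per_twh: float, generation_twh: float) -> float:
    """Fatalities implied by a historical mortality rate and one year's generation."""
    return deaths_per_twh * generation_twh

if __name__ == "__main__":
    # 0.03 deaths/TWh (historical rate cited above) x 2,602 TWh generated in 2023
    print(f"implied fatalities for one year of output: {expected_deaths(0.03, 2602):.0f}")
```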
Serious impacts of nuclear accidents are often not directly attributable to radiation exposure, but rather social and psychological effects. Evacuation and long-term displacement of affected populations created problems for many people, especially the elderly and hospital patients. Forced evacuation from a nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, and suicide. A comprehensive 2005 study on the aftermath of the Chernobyl disaster concluded that the mental health impact is the largest public health problem caused by the accident. Frank N. von Hippel, an American scientist, commented that a disproportionate fear of ionizing radiation (radiophobia) could have long-term psychological effects on the population of contaminated areas following the Fukushima disaster.
Accidents
Some serious nuclear and radiation accidents have occurred. The severity of nuclear accidents is generally classified using the International Nuclear Event Scale (INES) introduced by the International Atomic Energy Agency (IAEA). The scale ranks anomalous events or accidents on a scale from 0 (a deviation from normal operation that poses no safety risk) to 7 (a major accident with widespread effects). There have been three accidents of level 5 or higher in the civilian nuclear power industry, two of which, the Chernobyl accident and the Fukushima accident, are ranked at level 7.
The first major nuclear accidents were the Kyshtym disaster in the Soviet Union and the Windscale fire in the United Kingdom, both in 1957. The first major accident at a nuclear reactor in the USA occurred in 1961 at the SL-1, a U.S. Army experimental nuclear power reactor at the Idaho National Laboratory. An uncontrolled chain reaction resulted in a steam explosion which killed the three crew members and caused a meltdown. Another serious accident happened in 1968, when one of the two liquid-metal-cooled reactors on board the Soviet submarine K-27 underwent a fuel element failure, with the emission of gaseous fission products into the surrounding air, resulting in 9 crew fatalities and 83 injuries.
The Fukushima Daiichi nuclear accident was caused by the 2011 Tohoku earthquake and tsunami. The accident has not caused any radiation-related deaths but resulted in radioactive contamination of surrounding areas. The difficult cleanup operation is expected to cost tens of billions of dollars over 40 or more years. The Three Mile Island accident in 1979 was a smaller scale accident, rated at INES level 5. There were no direct or indirect deaths caused by the accident.
The impact of nuclear accidents is controversial. According to Benjamin K. Sovacool, fission energy accidents ranked first among energy sources in terms of their total economic cost, accounting for 41% of all property damage attributed to energy accidents. Another analysis found that coal, oil, liquid petroleum gas and hydroelectric accidents (primarily due to the Banqiao Dam disaster) have resulted in greater economic impacts than nuclear power accidents. The study compares latent cancer deaths attributable to nuclear power with immediate deaths from other energy sources per unit of energy generated, and does not include fossil-fuel-related cancers and other indirect deaths caused by fossil fuel consumption in its "severe accident" (an accident with more than five fatalities) classification. The Chernobyl accident in 1986 caused approximately 50 deaths from direct and indirect effects, and some temporary serious injuries from acute radiation syndrome. The future predicted mortality from increases in cancer rates is estimated at 4,000 in the decades to come. However, the costs have been large and are increasing.
Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with national and international conventions. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity. This cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a study by the Congressional Budget Office in the United States. These beyond-regular insurance costs for worst-case scenarios are not unique to nuclear power. Hydroelectric power plants are similarly not fully insured against a catastrophic event such as dam failures. For example, the failure of the Banqiao Dam caused the death of an estimated 30,000 to 200,000 people, and 11 million people lost their homes. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state.
Attacks and sabotage
Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities.
In the United States, the Nuclear Regulatory Commission carries out "Force on Force" (FOF) exercises at all nuclear power plant sites at least once every three years. In the United States, plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards.
Insider sabotage is also a threat because insiders can observe and work around security measures. Successful insider crimes depended on the perpetrators' observation and knowledge of security vulnerabilities. A fire caused 5–10 million dollars worth of damage to New York's Indian Point Energy Center in 1971. The arsonist was a plant maintenance worker.
Proliferation
Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-related nuclear technology to states that do not already possess nuclear weapons. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that they can also be used to make nuclear weapons. For this reason, nuclear power presents proliferation risks.
A nuclear power program can become a route leading to a nuclear weapon. An example of this is the concern over Iran's nuclear program. The re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-Proliferation Treaty, to which 190 countries adhere. As of April 2012, there were thirty-one countries that had civil nuclear power plants, of which nine had nuclear weapons. The vast majority of these nuclear weapons states produced weapons before building commercial nuclear power stations.
A fundamental goal for global security is to minimize the nuclear proliferation risks associated with the expansion of nuclear power. The Global Nuclear Energy Partnership was an international effort to create a distribution network in which developing countries in need of energy would receive nuclear fuel at a discounted rate, in exchange for that nation agreeing to forgo their own indigenous development of a uranium enrichment program. The France-based Eurodif/European Gaseous Diffusion Uranium Enrichment Consortium is a program that successfully implemented this concept, with Spain and other countries without enrichment facilities buying a share of the fuel produced at the French-controlled enrichment facility, but without a transfer of technology. Iran was an early participant from 1974 and remains a shareholder of Eurodif via Sofidif.
A 2009 United Nations report said that:
the revival of interest in nuclear power could result in the worldwide dissemination of uranium enrichment and spent fuel reprocessing technologies, which present obvious risks of proliferation as these technologies can produce fissile materials that are directly usable in nuclear weapons.
On the other hand, power reactors can also reduce nuclear weapon arsenals when military-grade nuclear materials are reprocessed to be used as fuel in nuclear power plants. The Megatons to Megawatts Program is considered the single most successful non-proliferation program to date. Up to 2005, the program had processed $8 billion of highly enriched, weapons-grade uranium into low enriched uranium suitable as nuclear fuel for commercial fission reactors by diluting it with natural uranium. This corresponds to the elimination of 10,000 nuclear weapons. For approximately two decades, this material generated nearly 10 percent of all the electricity consumed in the United States, or about half of all U.S. nuclear electricity, with a total of around 7,000 TWh of electricity produced. In total it is estimated to have cost $17 billion, a "bargain for US ratepayers", with Russia profiting $12 billion from the deal, much-needed revenue for the Russian nuclear oversight industry, which, after the collapse of the Soviet economy, had difficulty paying for the maintenance and security of the Russian Federation's highly enriched uranium and warheads. The Megatons to Megawatts Program was hailed as a major success by anti-nuclear weapon advocates as it has largely been the driving force behind the sharp reduction in the number of nuclear weapons worldwide since the cold war ended. However, without an increase in nuclear reactors and greater demand for fissile fuel, the cost of dismantling and down-blending has dissuaded Russia from continuing its disarmament. As of 2013, Russia appears not to be interested in extending the program.
Environmental impact
Being a low-carbon energy source with relatively small land-use requirements, nuclear energy can have a positive environmental impact. It also requires a constant supply of significant amounts of water and affects the environment through mining and milling. Its largest potential negative impacts arise from transgenerational risks: nuclear weapons proliferation, which may increase the risk of their use in the future; problems associated with the management of radioactive waste, such as groundwater contamination; accidents; and various forms of attack on waste storage sites, reprocessing plants, or power plants. However, these remain mostly risks, as historically there have been only a few disasters at nuclear power plants with known, relatively substantial environmental impacts.
Carbon emissions
Nuclear power is one of the leading low-carbon power generation methods of producing electricity, and in terms of total life-cycle greenhouse gas emissions per unit of energy generated, has emission values comparable to or lower than renewable energy. A 2014 analysis of the carbon footprint literature by the Intergovernmental Panel on Climate Change (IPCC) reported that the embodied total life-cycle emission intensity of nuclear power has a median value of 12 g CO2-eq/kWh, which is the lowest among all commercial baseload energy sources. This is contrasted with coal and natural gas at 820 and 490 g CO2-eq/kWh. As of 2021, nuclear reactors worldwide have helped avoid the emission of 72 billion tonnes of carbon dioxide since 1970, compared to coal-fired electricity generation, according to a report.
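Avoided-emissions estimates of this kind follow from simple arithmetic: nuclear generation multiplied by the difference between the displaced source's life-cycle intensity and nuclear's own. The sketch below applies the intensities quoted above to this article's 2023 generation figure; it is an illustration, not the cited report's methodology.

```python
def avoided_co2_tonnes(generation_twh: float,
                       displaced_g_per_kwh: float,
                       nuclear_g_per_kwh: float = 12.0) -> float:
    """Tonnes of CO2-equivalent avoided if `generation_twh` of nuclear output
    displaces a source with the given life-cycle intensity (g CO2-eq/kWh)."""
    kwh = generation_twh * 1e9                 # 1 TWh = 1e9 kWh
    grams_avoided = kwh * (displaced_g_per_kwh - nuclear_g_per_kwh)
    return grams_avoided / 1e6                 # grams -> tonnes

if __name__ == "__main__":
    # 2,602 TWh of nuclear generation in 2023, displacing coal at 820 g CO2-eq/kWh
    tonnes = avoided_co2_tonnes(2602, displaced_g_per_kwh=820)
    print(f"avoided: roughly {tonnes / 1e9:.1f} billion tonnes CO2-eq in one year")
```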
Radiation
The average dose from natural background radiation is 2.4 millisievert per year (mSv/a) globally. It varies between 1 mSv/a and 13 mSv/a, depending mostly on the geology of the location. According to the United Nations (UNSCEAR), regular nuclear power plant operations, including the nuclear fuel cycle, increase this amount by 0.0002 mSv/a of public exposure as a global average. The average dose from operating nuclear power plants to the local populations around them is less than 0.0001 mSv/a. For comparison, the average dose to those living in the vicinity of a coal power plant is over three times this dose, at 0.0003 mSv/a.
The Chernobyl accident resulted in the most affected surrounding populations and male recovery personnel receiving an average initial dose of 50 to 100 mSv over a few hours to weeks. The remaining global legacy of the worst nuclear power plant accident is an average exposure of 0.002 mSv/a, which continues to drop as the contamination decays, from an initial high of 0.04 mSv per person, averaged over the entire populace of the Northern Hemisphere, in the year of the accident in 1986.
Debate
The nuclear power debate concerns the controversy which has surrounded the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes.
Proponents of nuclear energy regard it as a sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on other energy sources that are also often dependent on imports. For example, proponents note that nuclear-generated electricity annually avoids 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities or is repurposed/recycled for other energy uses. M. King Hubbert, who popularized the concept of peak oil, saw oil as a resource that would run out and considered nuclear energy its replacement. Proponents also claim that the present quantity of nuclear waste is small and can be reduced through the latest technology of newer reactors, and that the operational safety record of fission electricity in terms of deaths is so far "unparalleled". Kharecha and Hansen estimated that "global nuclear power has prevented an average of 1.84 million air pollution-related deaths and 64 gigatonnes of CO2-equivalent (Gt-eq) greenhouse gas (GHG) emissions that would have resulted from fossil fuel burning" and, if continued, it could prevent up to 7 million deaths and 240 Gt-eq emissions by 2050.
Proponents also bring to attention the opportunity cost of using other forms of electricity. For example, the Environmental Protection Agency estimates that coal kills 30,000 people a year, as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real world example of impact provided by proponents is the 650,000 ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant.
Opponents believe that nuclear power poses many threats to people's health and the environment, such as the risk of nuclear weapons proliferation, the challenge of long-term safe waste management, and future terrorism. They also contend that nuclear power plants are complex systems where many things can and have gone wrong. Costs of the Chernobyl disaster amount to ≈$68 billion as of 2019 and are increasing, the Fukushima disaster is estimated to cost taxpayers ~$187 billion, and radioactive waste management is estimated to cost European Union nuclear operators ~$250 billion by 2050. However, in countries that already use nuclear energy, when not considering reprocessing, intermediate nuclear waste disposal costs could be relatively fixed to certain but unknown degrees "as the main part of these costs stems from the operation of the intermediate storage facility".
Critics find that one of the largest drawbacks of building new nuclear fission power plants is the large construction and operating cost compared to alternative sustainable energy sources. Further costs include ongoing research and development, expensive reprocessing where it is practiced, and decommissioning. Proponents note that focusing on the levelized cost of energy (LCOE), however, ignores the value premium associated with 24/7 dispatchable electricity and the cost of the storage and backup systems necessary to integrate variable energy sources into a reliable electrical grid. "Nuclear thus remains the dispatchable low-carbon technology with the lowest expected costs in 2025. Only large hydro reservoirs can provide a similar contribution at comparable costs but remain highly dependent on the natural endowments of individual countries."
Overall, many opponents find that nuclear energy cannot meaningfully contribute to climate change mitigation. In general, they find it too dangerous, too expensive and too slow to deploy, and regard it as an obstacle to achieving a transition towards sustainability and carbon-neutrality, effectively a distracting competition for resources (human, financial, time, infrastructure and expertise) that could instead go to the deployment and development of alternative, sustainable energy system technologies (such as wind, ocean and solar – including, for example, floating solar – as well as ways to manage their intermittency other than nuclear baseload generation, such as dispatchable generation, renewables diversification, super grids, flexible energy demand, supply-regulating smart grids and energy storage technologies).
Nevertheless, there is ongoing research and debate over the costs of new nuclear power, especially in regions where, among other factors, seasonal energy storage is difficult to provide and which aim to phase out fossil fuels in favor of low-carbon power faster than the global average. Some find that the financial transition costs of a 100% renewables-based European energy system that has completely phased out nuclear energy could be higher by 2050, based on current technologies (i.e. not considering potential advances in, for example, green hydrogen, transmission and flexibility capacities, ways to reduce energy needs, geothermal energy and fusion energy), when the grid only extends across Europe. Arguments of economics and safety are used by both sides of the debate.
Comparison with renewable energy
Slowing global warming requires a transition to a low-carbon economy, mainly by burning far less fossil fuel. Limiting global warming to 1.5 °C is technically possible if no new fossil fuel power plants are built from 2019. This has generated considerable interest and dispute in determining the best path forward to rapidly replace fossil-based fuels in the global energy mix, with intense academic debate. The IEA has said that countries without nuclear power should develop it alongside their renewable power.
Several studies suggest that it might be theoretically possible to cover a majority of world energy generation with new renewable sources. The Intergovernmental Panel on Climate Change (IPCC) has said that if governments were supportive, renewable energy supply could account for close to 80% of the world's energy use by 2050. While in developed nations the economically feasible geography for new hydropower is lacking, with every geographically suitable area largely already exploited, some proponents of wind and solar energy claim these resources alone could eliminate the need for nuclear power.
Nuclear power is comparable to, and in some cases lower than, many renewable energy sources in terms of lives lost per unit of electricity delivered. Depending on the recycling of renewable energy technologies, nuclear reactors may produce a much smaller volume of waste, although it is much more toxic, expensive to manage and longer-lived. A nuclear plant also needs to be disassembled and removed, and much of the disassembled plant must be stored as low-level nuclear waste for a few decades. The disposal and management of the wide variety of radioactive waste, of which there were over a quarter of a million tons as of 2018, can cause future damage and costs across the world for hundreds of thousands of years, possibly over a million years, due to issues such as leakage, malign retrieval, vulnerability to attacks (including on reprocessing and power plants), groundwater contamination, radiation and leakage to above ground, brine leakage or bacterial corrosion. The European Commission Joint Research Centre found that as of 2021 the necessary technologies for geological disposal of nuclear waste are available and can be deployed. Corrosion experts noted in 2020 that putting the problem of storage off any longer "isn't good for anyone". Separated plutonium and enriched uranium could be used for nuclear weapons, which, even with the current centralized (e.g. state-level) control and level of prevalence, are considered a difficult and substantial global risk with potentially large future impacts on human health, lives, civilization and the environment.
Speed of transition and investment needed
Analysis in 2015 by professor Barry W. Brook and colleagues found that nuclear energy could displace or remove fossil fuels from the electric grid completely within 10 years. This finding was based on the historically modest and proven rate at which nuclear energy was added in France and Sweden during their building programs in the 1980s. In a similar analysis, Brook had earlier determined that 50% of all global energy, including transportation synthetic fuels etc., could be generated within approximately 30 years if the global nuclear fission build rate was identical to historical proven installation rates calculated in GW per year per unit of global GDP (GW/year/$). This is in contrast to the conceptual studies for 100% renewable energy systems, which would require an order of magnitude more costly global investment per year, which has no historical precedent. These renewable scenarios would also need far greater land devoted to onshore wind and onshore solar projects. Brook notes that the "principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing [the other] low-carbon alternatives."
Scientific data indicates that, assuming 2021 emissions levels, humanity has a carbon budget equivalent to only 11 years of emissions left for limiting warming to 1.5 °C, while the construction of new nuclear reactors took a median of 7.2–10.9 years in 2018–2020. This is substantially longer than scaling up the deployment of wind and solar alongside other measures, especially for novel reactor types, and nuclear construction is also riskier, more often delayed and more dependent on state support. Researchers have cautioned that novel nuclear technologies, which have been in development for decades, are less tested, have higher proliferation risks, pose more new safety problems, are often far from commercialization and are more expensive, are not available in time. Critics of nuclear energy often oppose only nuclear fission energy but not nuclear fusion; however, fusion energy is unlikely to be commercially widespread before 2050.
Land use
The median land area used by US nuclear power stations per 1GW installed capacity is . To generate the same amount of electricity annually (taking into account capacity factors) from solar PV would require about , and from a wind farm about . Not included in this, is land required for the associated transmission lines, water supply, rail lines, mining and processing of nuclear fuel, and for waste disposal.
Research
Advanced fission reactor designs
Current fission reactors in operation around the world are second or third generation systems, with most of the first-generation systems having already been retired. Research into advanced generation IV reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals, including improved economics, safety, proliferation resistance, natural resource use and the ability to consume existing nuclear waste in the production of electricity. Most of these reactors differ significantly from current operating light water reactors, and are expected to be available for commercial construction after 2030.
Hybrid fusion-fission
Hybrid nuclear power is a proposed means of generating power by the use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it could extract all the fission energy that remains in spent fission fuel, reduce the volume of nuclear waste by orders of magnitude and, more importantly, eliminate all actinides present in the spent fuel, substances which cause security concerns.
Fusion
Nuclear fusion reactions have the potential to be safer and generate less radioactive waste than fission. These reactions appear potentially viable, though technically quite difficult and have yet to be created on a scale that could be used in a functional power plant. Fusion power has been under theoretical and experimental investigation since the 1950s. Nuclear fusion research is underway but fusion energy is not likely to be commercially widespread before 2050.
Several experimental nuclear fusion reactors and facilities exist. The largest and most ambitious international nuclear fusion project currently in progress is ITER, a large tokamak under construction in France. ITER is planned to pave the way for commercial fusion power by demonstrating self-sustained nuclear fusion reactions with positive energy gain. Construction of the ITER facility began in 2007, but the project has run into many delays and budget overruns. The facility is now not expected to begin operations until 2027, 11 years later than initially anticipated. A follow-on commercial nuclear fusion power station, DEMO, has been proposed. There are also proposals for a power plant based upon a different approach, that of inertial confinement fusion.
Fusion-powered electricity generation was initially believed to be readily achievable, as fission-electric power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2020, more than 80 years after the first attempts, commercialization of fusion power production was thought to be unlikely before 2050.
To enhance and accelerate the development of fusion energy, the United States Department of Energy (DOE) granted $46 million to eight firms, including Commonwealth Fusion Systems and Tokamak Energy Inc, in 2023. This ambitious initiative aims to introduce pilot-scale fusion within a decade.
See also
Atomic battery
Nuclear power by country
Nuclear weapons debate
Pro-nuclear movement
Thorium-based nuclear power
Uranium mining debate
World energy supply and consumption
References
Further reading
AEC Atom Information Booklets, both series, "Understanding the Atom" and "The World of the Atom". A total of 75 booklets published by the U.S. Atomic Energy Commission (AEC) in the 1960s and 1970s, authored by scientists; taken together, the booklets comprise a history of nuclear science and its applications at the time.
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
Brown, Kate (2013). Plutopia: Nuclear Families, Atomic Cities, and the Great Soviet and American Plutonium Disasters, Oxford University Press.
Clarfield, Gerald H. and Wiecek, William M. (1984). Nuclear America: Military and Civilian Nuclear Power in the United States 1940–1980, Harper & Row.
Cooke, Stephanie (2009). In Mortal Hands: A Cautionary History of the Nuclear Age, Black Inc.
Elliott, David (2007). Nuclear or Not? Does Nuclear Power Have a Place in a Sustainable Energy Future?, Palgrave.
Ferguson, Charles D., (2007). Nuclear Energy: Balancing Benefits and Risks Council on Foreign Relations.
Garwin, Richard L. and Charpak, Georges (2001) Megawatts and Megatons A Turning Point in the Nuclear Age?, Knopf.
Herbst, Alan M. and George W. Hopley (2007). Nuclear Energy Now: Why the Time has come for the World's Most Misunderstood Energy Source, Wiley.
Oreskes, Naomi, "Breaking the Techno-Promise: We do not have enough time for nuclear power to save us from the climate crisis", Scientific American, vol. 326, no. 2 (February 2022), p. 74.
Schneider, Mycle, Steve Thomas, Antony Froggatt, Doug Koplow (2016). The World Nuclear Industry Status Report: World Nuclear Industry Status as of 1 January 2016.
Walker, J. Samuel (1992). Containing the Atom: Nuclear Regulation in a Changing Environment, 1963–1971, Berkeley, California: University of California Press.
Weart, Spencer R. The Rise of Nuclear Fear. Cambridge, Massachusetts: Harvard University Press, 2012.
External links
U.S. Energy Information Administration
Nuclear Fuel Cycle Cost Calculator
Energy conversion
Power
Power station technology
Articles containing video clips
Global issues | Nuclear power | [
"Physics"
] | 13,652 | [
"Physical quantities",
"Nuclear power",
"Nuclear technology",
"Power (physics)",
"Nuclear physics"
] |
22,303 | https://en.wikipedia.org/wiki/Oxygen | Oxygen is a chemical element with the symbol O and atomic number 8. It is a member of the chalcogen group in the periodic table, a highly reactive nonmetal, and a potent oxidizing agent that readily forms oxides with most elements as well as with other compounds. Oxygen is the most abundant element in Earth's crust, and the third-most abundant element in the universe after hydrogen and helium.
At standard temperature and pressure, two oxygen atoms will bind covalently to form dioxygen, a colorless and odorless diatomic gas with the chemical formula O2. Dioxygen gas currently constitutes 20.95% of the Earth's atmosphere by molar fraction, though this has changed considerably over long periods of time in Earth's history. Oxygen makes up almost half of the Earth's crust in the form of various oxides such as water, carbon dioxide, iron oxides and silicates.
All eukaryotic organisms, including plants, animals, fungi, algae and most protists, need oxygen for cellular respiration, which extracts chemical energy by the reaction of oxygen with organic molecules derived from food and releases carbon dioxide as a waste product. In aquatic animals, dissolved oxygen in water is absorbed by specialized respiratory organs called gills, through the skin or via the gut; in terrestrial animals such as tetrapods, oxygen in air is actively taken into the body via specialized organs known as lungs, where gas exchange takes place to diffuse oxygen into the blood and carbon dioxide out, and the body's circulatory system then transports the oxygen to other tissues where cellular respiration takes place. However in insects, the most successful and biodiverse terrestrial clade, oxygen is directly conducted to the internal tissues via a deep network of airways.
Many major classes of organic molecules in living organisms contain oxygen atoms, such as proteins, nucleic acids, carbohydrates and fats, as do the major constituent inorganic compounds of animal shells, teeth, and bone. Most of the mass of living organisms is oxygen as a component of water, the major constituent of lifeforms. Oxygen in Earth's atmosphere is produced by biotic photosynthesis, in which photon energy in sunlight is captured by chlorophyll to split water molecules and then react with carbon dioxide to produce carbohydrates and oxygen is released as a byproduct. Oxygen is too chemically reactive to remain a free element in air without being continuously replenished by the photosynthetic activities of autotrophs such as cyanobacteria, chloroplast-bearing algae and plants. A much rarer triatomic allotrope of oxygen, ozone (), strongly absorbs the UVB and UVC wavelengths and forms a protective ozone layer at the lower stratosphere, which shields the biosphere from ionizing ultraviolet radiation. However, ozone present at the surface is a corrosive byproduct of smog and thus an air pollutant.
Oxygen was isolated by Michael Sendivogius before 1604, but it is commonly believed that the element was discovered independently by Carl Wilhelm Scheele, in Uppsala, in 1773 or earlier, and Joseph Priestley in Wiltshire, in 1774. Priority is often given for Priestley because his work was published first. Priestley, however, called oxygen "dephlogisticated air", and did not recognize it as a chemical element. The name oxygen was coined in 1777 by Antoine Lavoisier, who first recognized oxygen as a chemical element and correctly characterized the role it plays in combustion.
Common industrial uses of oxygen include production of steel, plastics and textiles, brazing, welding and cutting of steels and other metals, rocket propellant, oxygen therapy, and life support systems in aircraft, submarines, spaceflight and diving.
History of study
Early experiments
One of the first known experiments on the relationship between combustion and air was conducted by the 2nd century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water resulted in some water rising into the neck. Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. Many centuries later Leonardo da Vinci built on Philo's work by observing that a portion of air is consumed during combustion and respiration.
In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow (1641–1679) refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus. In one experiment, he found that placing either a mouse or a lit candle in a closed container over water caused the water to rise and replace one-fourteenth of the air's volume before extinguishing the subjects. From this, he surmised that nitroaereus is consumed in both respiration and combustion.
Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it. He also thought that the lungs separate nitroaereus from air and pass it into the blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain substances in the body. Accounts of these and other experiments and ideas were published in 1668 in his work Tractatus duo in the tract "De respiratione".
Phlogiston theory
Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th and the 18th century but none of them recognized it as a chemical element. This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which was then the favored explanation of those processes.
Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl by 1731, phlogiston theory stated that all combustible materials were made of two parts. One part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx.
Highly combustible materials that leave little residue, such as wood or coal, were thought to be made mostly of phlogiston; non-combustible substances that corrode, such as iron, contained very little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted to test the idea; instead, it was based on observations of what happens when something burns, that most common objects appear to become lighter and seem to lose something in the process.
Discovery
Polish alchemist, philosopher, and physician Michael Sendivogius (Michał Sędziwój) in his work De Lapide Philosophorum Tractatus duodecim e naturae fonte et manuali experientia depromti ["Twelve Treatises on the Philosopher's Stone drawn from the source of nature and manual experience"] (1604) described a substance contained in air, referring to it as 'cibus vitae' (food of life,) and according to Polish historian Roman Bugaj, this substance is identical with oxygen. Sendivogius, during his experiments performed between 1598 and 1604, properly recognized that the substance is equivalent to the gaseous byproduct released by the thermal decomposition of potassium nitrate. In Bugaj's view, the isolation of oxygen and the proper association of the substance to that part of air which is required for life, provides sufficient evidence for the discovery of oxygen by Sendivogius. This discovery of Sendivogius was however frequently denied by the generations of scientists and chemists which succeeded him.
It is also commonly claimed that oxygen was first discovered by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas by heating mercuric oxide (HgO) and various nitrates in 1771–72. Scheele called the gas "fire air" because it was then the only known agent to support combustion. He wrote an account of this discovery in a manuscript titled Treatise on Air and Fire, which he sent to his publisher in 1775. That document was published in 1777.
In the meantime, on August 1, 1774, an experiment conducted by the British clergyman Joseph Priestley focused sunlight on mercuric oxide contained in a glass tube, which liberated a gas he named "dephlogisticated air". He noted that candles burned brighter in the gas and that a mouse was more active and lived longer while breathing it. After breathing the gas himself, Priestley wrote: "The feeling of it to my lungs was not sensibly different from that of common air, but I fancied that my breast felt peculiarly light and easy for some time afterwards." Priestley published his findings in 1775 in a paper titled "An Account of Further Discoveries in Air", which was included in the second volume of his book titled Experiments and Observations on Different Kinds of Air. Because he published his findings first, Priestley is usually given priority in the discovery.
The French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance independently. Priestley visited Lavoisier in October 1774 and told him about his experiment and how he liberated the new gas. Scheele had also dispatched a letter to Lavoisier on September 30, 1774, which described his discovery of the previously unknown substance, but Lavoisier never acknowledged receiving it (a copy of the letter was found in Scheele's belongings after his death).
Lavoisier's contribution
Lavoisier conducted the first adequate quantitative experiments on oxidation and gave the first correct explanation of how combustion works. He used these and similar experiments, all started in 1774, to discredit the phlogiston theory and to prove that the substance discovered by Priestley and Scheele was a chemical element.
In one experiment, Lavoisier observed that there was no overall increase in weight when tin and air were heated in a closed container. He noted that air rushed in when he opened the container, which indicated that part of the trapped air had been consumed. He also noted that the tin had increased in weight and that increase was the same as the weight of the air that rushed back in. This and other experiments on combustion were documented in his book Sur la combustion en général, which was published in 1777. In that work, he proved that air is a mixture of two gases; 'vital air', which is essential to combustion and respiration, and azote (Gk. "lifeless"), which did not support either. Azote later became nitrogen in English, although it has kept the earlier name in French and several other European languages.
Etymology
Lavoisier renamed 'vital air' to oxygène in 1777 from the Greek roots (oxys) (acid, literally 'sharp', from the taste of acids) and -γενής (-genēs) (producer, literally begetter), because he mistakenly believed that oxygen was a constituent of all acids. Chemists (such as Sir Humphry Davy in 1812) eventually determined that Lavoisier was wrong in this regard (e.g. Hydrogen chloride (HCl) is a strong acid that doesn't contain oxygen), but by then the name was too well established.
Oxygen entered the English language despite opposition by English scientists and the fact that the Englishman Priestley had first isolated the gas and written about it. This is partly due to a poem praising the gas titled "Oxygen" in the popular book The Botanic Garden (1791) by Erasmus Darwin, grandfather of Charles Darwin.
Later history
John Dalton's original atomic hypothesis presumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed that water's formula was HO, leading to the conclusion that the atomic mass of oxygen was 8 times that of hydrogen, instead of the modern value of about 16. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen; and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the diatomic elemental molecules in those gases.
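The inference Dalton drew can be reproduced with a line of arithmetic. The sketch below is purely illustrative: the rounded 8:1 oxygen-to-hydrogen mass ratio is an assumption standing in for the experimental value, and the names are invented for the example.

    # How the assumed formula of water changes the inferred atomic mass of oxygen.
    mass_ratio_O_to_H = 8  # assumed rounded mass ratio of oxygen to hydrogen in water

    formulas = {"HO (Dalton's assumption)": (1, 1), "H2O (modern)": (2, 1)}  # (H atoms, O atoms)

    for name, (n_H, n_O) in formulas.items():
        # mass ratio = (n_O * A_O) / (n_H * A_H), so A_O / A_H = ratio * n_H / n_O
        inferred_A_O = mass_ratio_O_to_H * n_H / n_O
        print(f"{name}: oxygen inferred to be {inferred_A_O:g} times the mass of hydrogen")

With the HO formula the data imply an atomic mass of 8 relative to hydrogen; with the correct H2O formula the same data give the modern value of about 16.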
The first commercial method of producing oxygen was chemical, the so-called Brin process involving a reversible reaction of barium oxide. It was invented in 1852 and commercialized in 1884, but was displaced by newer methods in early 20th century.
By the late 19th century scientists realized that air could be liquefied and its components isolated by compressing and cooling it. Using a cascade method, Swiss chemist and physicist Raoul Pierre Pictet evaporated liquid sulfur dioxide in order to liquefy carbon dioxide, which in turn was evaporated to cool oxygen gas enough to liquefy it. He sent a telegram on December 22, 1877, to the French Academy of Sciences in Paris announcing his discovery of liquid oxygen. Just two days later, French physicist Louis Paul Cailletet announced his own method of liquefying molecular oxygen. Only a few drops of the liquid were produced in each case and no meaningful analysis could be conducted. Oxygen was liquefied in a stable state for the first time on March 29, 1883, by Polish scientists from Jagiellonian University, Zygmunt Wróblewski and Karol Olszewski.
In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen for study. The first commercially viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. Both men lowered the temperature of air until it liquefied and then distilled the component gases by boiling them off one at a time and capturing them separately. Later, in 1901, oxyacetylene welding was demonstrated for the first time by burning a mixture of acetylene and compressed . This method of welding and cutting metal later became common.
In 1923, the American scientist Robert H. Goddard became the first person to develop a rocket engine that burned liquid fuel; the engine used gasoline for fuel and liquid oxygen as the oxidizer. Goddard successfully flew a small liquid-fueled rocket 56 m at 97 km/h on March 16, 1926, in Auburn, Massachusetts, US.
In academic laboratories, oxygen can be prepared by heating together potassium chlorate mixed with a small proportion of manganese dioxide.
Oxygen levels in the atmosphere are trending slightly downward globally, possibly because of fossil-fuel burning.
Characteristics
Properties and molecular structure
At standard temperature and pressure, oxygen is a colorless, odorless, and tasteless gas with the molecular formula O2, referred to as dioxygen.
As dioxygen, two oxygen atoms are chemically bound to each other. The bond can be variously described based on level of theory, but is reasonably and simply described as a covalent double bond that results from the filling of molecular orbitals formed from the atomic orbitals of the individual oxygen atoms, the filling of which results in a bond order of two. More specifically, the double bond is the result of sequential, low-to-high energy, or Aufbau, filling of orbitals, and the resulting cancellation of contributions from the 2s electrons, after sequential filling of the low σ and σ* orbitals; σ overlap of the two atomic 2p orbitals that lie along the O–O molecular axis and π overlap of two pairs of atomic 2p orbitals perpendicular to the O–O molecular axis, and then cancellation of contributions from the remaining two 2p electrons after their partial filling of the π* orbitals.
This combination of cancellations and σ and π overlaps results in dioxygen's double-bond character and reactivity, and a triplet electronic ground state. An electron configuration with two unpaired electrons, as is found in dioxygen orbitals (see the filled π* orbitals in the diagram) that are of equal energy—i.e., degenerate—is a configuration termed a spin triplet state. Hence, the ground state of the molecule is referred to as triplet oxygen. The highest-energy, partially filled orbitals are antibonding, and so their filling weakens the bond order from three to two. Because of its unpaired electrons, triplet oxygen reacts only slowly with most organic molecules, which have paired electron spins; this prevents spontaneous combustion.
In the triplet form, molecules are paramagnetic. That is, they impart magnetic character to oxygen when it is in the presence of a magnetic field, because of the spin magnetic moments of the unpaired electrons in the molecule, and the negative exchange energy between neighboring molecules. Liquid oxygen is so magnetic that, in laboratory demonstrations, a bridge of liquid oxygen may be supported against its own weight between the poles of a powerful magnet.
Singlet oxygen is a name given to several higher-energy species of molecular in which all the electron spins are paired. It is much more reactive with common organic molecules than is normal (triplet) molecular oxygen. In nature, singlet oxygen is commonly formed from water during photosynthesis, using the energy of sunlight. It is also produced in the troposphere by the photolysis of ozone by light of short wavelength and by the immune system as a source of active oxygen. Carotenoids in photosynthetic organisms (and possibly animals) play a major role in absorbing energy from singlet oxygen and converting it to the unexcited ground state before it can cause harm to tissues.
Allotropes
The common allotrope of elemental oxygen on Earth is called dioxygen, O2, the major part of the Earth's atmospheric oxygen (see Occurrence). O2 has a bond length of 121 pm and a bond energy of 498 kJ/mol. O2 is used by complex forms of life, such as animals, in cellular respiration. Other aspects of O2 are covered in the remainder of this article.
Trioxygen (O3) is usually known as ozone and is a very reactive allotrope of oxygen that is damaging to lung tissue. Ozone is produced in the upper atmosphere when O2 combines with atomic oxygen made by the splitting of O2 by ultraviolet (UV) radiation. Since ozone absorbs strongly in the UV region of the spectrum, the ozone layer of the upper atmosphere functions as a protective radiation shield for the planet. Near the Earth's surface, it is a pollutant formed as a by-product of automobile exhaust. At low Earth orbit altitudes, sufficient atomic oxygen is present to cause corrosion of spacecraft.
The metastable molecule tetraoxygen (O4) was discovered in 2001, and was assumed to exist in one of the six phases of solid oxygen. It was proven in 2006 that this phase, created by pressurizing O2 to 20 GPa, is in fact a rhombohedral O8 cluster. This cluster has the potential to be a much more powerful oxidizer than either O2 or O3 and may therefore be used in rocket fuel. A metallic phase was discovered in 1990 when solid oxygen was subjected to a pressure above 96 GPa, and it was shown in 1998 that at very low temperatures this phase becomes superconducting.
Physical properties
Oxygen dissolves more readily in water than nitrogen does, and more readily in freshwater than in seawater. Water in equilibrium with air contains approximately 1 molecule of dissolved O2 for every 2 molecules of N2 (1:2), compared with an atmospheric ratio of approximately 1:4. The solubility of oxygen in water is temperature-dependent: about twice as much dissolves at 0 °C as at 20 °C. At 25 °C under air at standard pressure, freshwater can dissolve about 6.04 milliliters (mL) of oxygen per liter, and seawater about 4.95 mL per liter. At 5 °C the solubility increases to 9.0 mL (50% more than at 25 °C) per liter for freshwater and 7.2 mL (45% more) per liter for seawater.
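The percentage figures in this paragraph follow directly from the quoted solubilities, as the short check below shows; all numbers are taken from the text above.

    # Verify the temperature dependence of O2 solubility quoted above (mL of O2 per liter of water).
    freshwater_ml_per_l = {25: 6.04, 5: 9.0}
    seawater_ml_per_l = {25: 4.95, 5: 7.2}

    def percent_increase(solubility: dict) -> float:
        """Percentage gain in dissolved O2 when cooling from 25 °C to 5 °C."""
        return (solubility[5] / solubility[25] - 1) * 100

    print(f"freshwater: {percent_increase(freshwater_ml_per_l):.0f}% more O2 at 5 °C")  # ~49%
    print(f"seawater: {percent_increase(seawater_ml_per_l):.0f}% more O2 at 5 °C")      # ~45%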
Oxygen condenses at 90.20 K (−182.95 °C, −297.31 °F) and freezes at 54.36 K (−218.79 °C, −361.82 °F). Both liquid and solid O2 are clear substances with a light sky-blue color caused by absorption in the red (in contrast with the blue color of the sky, which is due to Rayleigh scattering of blue light). High-purity liquid O2 is usually obtained by the fractional distillation of liquefied air. Liquid oxygen may also be condensed from air using liquid nitrogen as a coolant.
Liquid oxygen is a highly reactive substance and must be segregated from combustible materials.
The spectroscopy of molecular oxygen is associated with the atmospheric processes of aurora and airglow. The absorption in the Herzberg continuum and Schumann–Runge bands in the ultraviolet produces atomic oxygen that is important in the chemistry of the middle atmosphere. Excited-state singlet molecular oxygen is responsible for red chemiluminescence in solution.
Table of thermal and physical properties of oxygen (O2) at atmospheric pressure:
Isotopes and stellar origin
Naturally occurring oxygen is composed of three stable isotopes, 16O, 17O, and 18O, with 16O being the most abundant (99.762% natural abundance).
Most 16O is synthesized at the end of the helium fusion process in massive stars but some is made in the neon burning process. 17O is primarily made by the burning of hydrogen into helium during the CNO cycle, making it a common isotope in the hydrogen burning zones of stars. Most 18O is produced when 14N (made abundant from CNO burning) captures a 4He nucleus, making 18O common in the helium-rich zones of evolved, massive stars.
Fifteen radioisotopes have been characterized, ranging from 11O to 28O. The most stable are 15O with a half-life of 122.24 seconds and 14O with a half-life of 70.606 seconds. All of the remaining radioactive isotopes have half-lives that are less than 27 seconds and the majority of these have half-lives that are less than 83 milliseconds. The most common decay mode of the isotopes lighter than 16O is β+ decay to yield nitrogen, and the most common mode for the isotopes heavier than 18O is beta decay to yield fluorine.
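The half-lives quoted above translate into decay fractions through the standard exponential-decay law. The sketch below is illustrative; the ten-minute observation time is an arbitrary choice, not a figure from the article.

    # Fraction of a radioisotope sample remaining after time t: N(t)/N0 = (1/2)**(t / t_half).
    HALF_LIFE_SECONDS = {"O-15": 122.24, "O-14": 70.606}  # half-lives quoted above

    def fraction_remaining(isotope: str, t_seconds: float) -> float:
        return 0.5 ** (t_seconds / HALF_LIFE_SECONDS[isotope])

    # Example: after 10 minutes only a few percent of an O-15 sample is left.
    print(f"O-15 after 10 minutes: {fraction_remaining('O-15', 600):.1%} remaining")
    print(f"O-14 after 10 minutes: {fraction_remaining('O-14', 600):.2%} remaining")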
Occurrence
Oxygen is the most abundant chemical element by mass in the Earth's biosphere, air, sea and land. Oxygen is the third most abundant chemical element in the universe, after hydrogen and helium. About 0.9% of the Sun's mass is oxygen. Oxygen constitutes 49.2% of the Earth's crust by mass as part of oxide compounds such as silicon dioxide, and is the most abundant element by mass in the Earth's crust. It is also the major component of the world's oceans (88.8% by mass). Oxygen gas is the second most common component of the Earth's atmosphere, taking up 20.8% of its volume and 23.1% of its mass (some 10^15 tonnes). Earth is unusual among the planets of the Solar System in having such a high concentration of oxygen gas in its atmosphere: Mars (with 0.1% by volume) and Venus have much less. The O2 surrounding those planets is produced solely by the action of ultraviolet radiation on oxygen-containing molecules such as carbon dioxide.
The unusually high concentration of oxygen gas on Earth is the result of the oxygen cycle. This biogeochemical cycle describes the movement of oxygen within and between its three main reservoirs on Earth: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for modern Earth's atmosphere. Photosynthesis releases oxygen into the atmosphere, while respiration, decay, and combustion remove it from the atmosphere. In the present equilibrium, production and consumption occur at the same rate.
Free oxygen also occurs in solution in the world's water bodies. The increased solubility of O2 at lower temperatures (see Physical properties) has important implications for ocean life, as polar oceans support a much higher density of life due to their higher oxygen content. Water polluted with plant nutrients such as nitrates or phosphates may stimulate the growth of algae by a process called eutrophication, and the decay of these organisms and other biomaterials may reduce the O2 content in eutrophic water bodies. Scientists assess this aspect of water quality by measuring the water's biochemical oxygen demand, or the amount of O2 needed to restore it to a normal concentration.
Analysis
Paleoclimatologists measure the ratio of oxygen-18 and oxygen-16 in the shells and skeletons of marine organisms to determine the climate millions of years ago (see oxygen isotope ratio cycle). Seawater molecules that contain the lighter isotope, oxygen-16, evaporate at a slightly faster rate than water molecules containing the 12% heavier oxygen-18, and this disparity increases at lower temperatures. During periods of lower global temperatures, snow and rain from that evaporated water tends to be higher in oxygen-16, and the seawater left behind tends to be higher in oxygen-18. Marine organisms then incorporate more oxygen-18 into their skeletons and shells than they would in a warmer climate. Paleoclimatologists also directly measure this ratio in the water molecules of ice core samples as old as hundreds of thousands of years.
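These 18O/16O measurements are conventionally expressed as δ18O, the per-mil deviation of a sample's isotope ratio from a reference standard. The sketch below shows only that calculation; the numerical ratios are placeholders chosen to illustrate the sign convention, not measured values.

    def delta_18O_permil(r_sample: float, r_reference: float) -> float:
        """delta-18O = (R_sample / R_reference - 1) * 1000, reported in per mil."""
        return (r_sample / r_reference - 1) * 1000

    # Placeholder ratios: a sample slightly enriched in 18O relative to the
    # reference gives a positive delta-18O, a depleted sample a negative one.
    print(f"{delta_18O_permil(0.0020100, 0.0020052):+.1f} per mil")  # ≈ +2.4
    print(f"{delta_18O_permil(0.0020000, 0.0020052):+.1f} per mil")  # ≈ -2.6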
Planetary geologists have measured the relative quantities of oxygen isotopes in samples from the Earth, the Moon, Mars, and meteorites, but were long unable to obtain reference values for the isotope ratios in the Sun, believed to be the same as those of the primordial solar nebula. Analysis of a silicon wafer exposed to the solar wind in space and returned by the crashed Genesis spacecraft has shown that the Sun has a higher proportion of oxygen-16 than does the Earth. The measurement implies that an unknown process depleted oxygen-16 from the Sun's disk of protoplanetary material prior to the coalescence of dust grains that formed the Earth.
Oxygen presents two spectrophotometric absorption bands peaking at the wavelengths 687 and 760 nm. Some remote sensing scientists have proposed using the measurement of the radiance coming from vegetation canopies in those bands to characterize plant health status from a satellite platform. This approach exploits the fact that in those bands it is possible to discriminate the vegetation's reflectance from its fluorescence, which is much weaker. The measurement is technically difficult owing to the low signal-to-noise ratio and the physical structure of vegetation; but it has been proposed as a possible method of monitoring the carbon cycle from satellites on a global scale.
Biological production and role of O2
Photosynthesis and respiration
In nature, free oxygen is produced by the light-driven splitting of water during oxygenic photosynthesis. According to some estimates, green algae and cyanobacteria in marine environments provide about 70% of the free oxygen produced on Earth, and the rest is produced by terrestrial plants. Other estimates of the oceanic contribution to atmospheric oxygen are higher, while some estimates are lower, suggesting oceans produce ~45% of Earth's atmospheric oxygen each year.
A simplified overall formula for photosynthesis is
6 CO2 + 6 H2O + photons → C6H12O6 + 6 O2
or simply
carbon dioxide + water + sunlight → glucose + dioxygen
Photolytic oxygen evolution occurs in the thylakoid membranes of photosynthetic organisms and requires the energy of four photons. Many steps are involved, but the result is the formation of a proton gradient across the thylakoid membrane, which is used to synthesize adenosine triphosphate (ATP) via photophosphorylation. The remaining (after production of the water molecule) is released into the atmosphere.
Oxygen is used in mitochondria in the generation of ATP during oxidative phosphorylation. The reaction for aerobic respiration is essentially the reverse of photosynthesis and is simplified as
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + 2880 kJ/mol
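Dividing the energy term in this equation by the molar mass of glucose gives the familiar energy content of carbohydrate. The molar mass used below is a standard value supplied for the example, not a figure from this article.

    # Energy released by aerobic respiration per gram of glucose.
    ENERGY_KJ_PER_MOL = 2880      # from the respiration equation above
    GLUCOSE_G_PER_MOL = 180.16    # molar mass of C6H12O6 (standard value, assumed here)

    print(f"≈ {ENERGY_KJ_PER_MOL / GLUCOSE_G_PER_MOL:.0f} kJ released per gram of glucose")  # ~16 kJ/g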
In vertebrates, O2 diffuses through membranes in the lungs and into red blood cells. Hemoglobin binds O2, changing color from bluish red to bright red (CO2 is released from another part of hemoglobin through the Bohr effect). Other animals use hemocyanin (molluscs and some arthropods) or hemerythrin (spiders and lobsters). A liter of blood can dissolve 200 cm3 of O2.
Until the discovery of anaerobic metazoa, oxygen was thought to be a requirement for all complex life.
Reactive oxygen species, such as the superoxide ion (O2−) and hydrogen peroxide (H2O2), are reactive by-products of oxygen use in organisms. Parts of the immune system of higher organisms create peroxide, superoxide, and singlet oxygen to destroy invading microbes. Reactive oxygen species also play an important role in the hypersensitive response of plants against pathogen attack. Oxygen is damaging to obligately anaerobic organisms, which were the dominant form of early life on Earth until O2 began to accumulate in the atmosphere about 2.5 billion years ago during the Great Oxygenation Event, about a billion years after the first appearance of these organisms.
An adult human at rest inhales 1.8 to 2.4 grams of oxygen per minute. This amounts to more than 6 billion tonnes of oxygen inhaled by humanity per year.
Living organisms
The free oxygen partial pressure in the body of a living vertebrate organism is highest in the respiratory system, and decreases along any arterial system, peripheral tissues, and venous system, respectively. Partial pressure is the pressure that oxygen would have if it alone occupied the volume.
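Partial pressure follows from Dalton's law: each gas contributes to the total pressure in proportion to its mole fraction. A minimal sketch using the atmospheric O2 fraction quoted earlier in the article; the sea-level total pressure is a standard value assumed for the example.

    # Sea-level partial pressure of oxygen from its mole fraction in air.
    TOTAL_PRESSURE_KPA = 101.325   # standard atmosphere (assumed)
    O2_MOLE_FRACTION = 0.2095      # atmospheric O2 fraction quoted earlier

    p_o2_kpa = O2_MOLE_FRACTION * TOTAL_PRESSURE_KPA
    print(f"Sea-level O2 partial pressure ≈ {p_o2_kpa:.1f} kPa")  # ≈ 21 kPa

The result, about 21 kPa, is the normal sea-level O2 partial pressure referenced later in the toxicity discussion.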
Build-up in the atmosphere
Free oxygen gas was almost nonexistent in Earth's atmosphere before photosynthetic archaea and bacteria evolved, probably about 3.5 billion years ago. Free oxygen first appeared in significant quantities during the Paleoproterozoic era (between 3.0 and 2.3 billion years ago). Even though there was much dissolved iron in the oceans when oxygenic photosynthesis was becoming more common, it appears the banded iron formations were created by anoxygenic or micro-aerophilic iron-oxidizing bacteria which dominated the deeper areas of the photic zone, while oxygen-producing cyanobacteria covered the shallows. Free oxygen began to outgas from the oceans 3–2.7 billion years ago, reaching 10% of its present level around 1.7 billion years ago.
The presence of large amounts of dissolved and free oxygen in the oceans and atmosphere may have driven most of the extant anaerobic organisms to extinction during the Great Oxygenation Event (oxygen catastrophe) about 2.4 billion years ago. Cellular respiration using enables aerobic organisms to produce much more ATP than anaerobic organisms. Cellular respiration of occurs in all eukaryotes, including all complex multicellular organisms such as plants and animals.
Since the beginning of the Cambrian period 540 million years ago, atmospheric levels have fluctuated between 15% and 30% by volume. Towards the end of the Carboniferous period (about 300 million years ago) atmospheric levels reached a maximum of 35% by volume, which may have contributed to the large size of insects and amphibians at this time.
Variations in atmospheric oxygen concentration have shaped past climates. When oxygen declined, atmospheric density dropped, which in turn increased surface evaporation, causing precipitation increases and warmer temperatures.
At the current rate of photosynthesis it would take about 2,000 years to regenerate the entire in the present atmosphere.
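Combined with the atmospheric O2 inventory of roughly 10^15 tonnes quoted earlier, this regeneration time implies the net rate of photosynthetic O2 production. This is only an order-of-magnitude check that reuses figures already given in the article.

    # Implied photosynthetic O2 production rate from the figures quoted in this article.
    ATMOSPHERIC_O2_TONNES = 1e15     # rough atmospheric inventory quoted earlier
    REGENERATION_TIME_YEARS = 2000   # regeneration time quoted above

    rate = ATMOSPHERIC_O2_TONNES / REGENERATION_TIME_YEARS
    print(f"Implied production ≈ {rate:.0e} tonnes of O2 per year")  # ~5e11 tonnes/year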
It is estimated that oxygen on Earth will last for about one billion years.
Extraterrestrial free oxygen
In the field of astrobiology and in the search for extraterrestrial life oxygen is a strong biosignature. That said it might not be a definite biosignature, being possibly produced abiotically on celestial bodies with processes and conditions (such as a peculiar hydrosphere) which allow free oxygen, like with Europa's and Ganymede's thin oxygen atmospheres.
Industrial production
One hundred million tonnes of O2 are extracted from air for industrial uses annually by two primary methods. The most common method is fractional distillation of liquefied air, with N2 distilling as a vapor while O2 is left as a liquid.
The other primary method of producing O2 is passing a stream of clean, dry air through one bed of a pair of identical zeolite molecular sieves, which adsorbs the nitrogen and delivers a gas stream that is 90% to 93% O2. Simultaneously, nitrogen gas is released from the other nitrogen-saturated zeolite bed by reducing the chamber operating pressure and diverting part of the oxygen gas from the producer bed through it, in the reverse direction of flow. After a set cycle time the operation of the two beds is interchanged, thereby allowing a continuous supply of gaseous oxygen to be pumped through a pipeline. This is known as pressure swing adsorption. Oxygen gas is increasingly obtained by these non-cryogenic technologies (see also the related vacuum swing adsorption).
Oxygen gas can also be produced through electrolysis of water into molecular oxygen and hydrogen. DC electricity must be used: if AC is used, the gases in each limb consist of hydrogen and oxygen in the explosive ratio 2:1. A related method is the electrocatalytic evolution of O2 from oxides and oxoacids. Chemical catalysts can be used as well, such as in chemical oxygen generators or oxygen candles that are used as part of the life-support equipment on submarines, and are still part of standard equipment on commercial airliners in case of depressurization emergencies. Another air separation method is forcing air to dissolve through ceramic membranes based on zirconium dioxide, by either high pressure or an electric current, to produce nearly pure O2 gas.
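For the electrolysis route, Faraday's law fixes how much oxygen a given current can liberate: the anode half-reaction 2 H2O → O2 + 4 H+ + 4 e− consumes four electrons per O2 molecule. The current and run time below are arbitrary assumptions chosen to illustrate the calculation.

    # Mass of O2 produced by water electrolysis for a given current and time (Faraday's law).
    FARADAY_C_PER_MOL = 96485
    ELECTRONS_PER_O2 = 4          # 2 H2O -> O2 + 4 H+ + 4 e-
    O2_G_PER_MOL = 32.0

    def o2_grams(current_amps: float, time_seconds: float) -> float:
        moles_o2 = (current_amps * time_seconds) / (ELECTRONS_PER_O2 * FARADAY_C_PER_MOL)
        return moles_o2 * O2_G_PER_MOL

    print(f"10 A for one hour ≈ {o2_grams(10, 3600):.1f} g of O2")  # ≈ 3 g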
Storage
Oxygen storage methods include high-pressure oxygen tanks, cryogenics and chemical compounds. For reasons of economy, oxygen is often transported in bulk as a liquid in specially insulated tankers, since one liter of liquefied oxygen is equivalent to 840 liters of gaseous oxygen at atmospheric pressure and ambient temperature. Such tankers are used to refill bulk liquid-oxygen storage containers, which stand outside hospitals and other institutions that need large volumes of pure oxygen gas. Liquid oxygen is passed through heat exchangers, which convert the cryogenic liquid into gas before it enters the building. Oxygen is also stored and shipped in smaller cylinders containing the compressed gas, a form that is useful in certain portable medical applications and in oxy-fuel welding and cutting.
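The 840:1 expansion ratio quoted above is what makes bulk liquid transport economical, as a toy calculation shows. The tanker capacity is an assumed figure for illustration only.

    # Gas volume delivered by a liquid-oxygen tanker, using the 840:1 ratio quoted above.
    EXPANSION_RATIO = 840            # liters of gas per liter of liquid oxygen
    tanker_capacity_liters = 20000   # assumed tanker capacity

    gas_liters = tanker_capacity_liters * EXPANSION_RATIO
    print(f"{tanker_capacity_liters} L of liquid oxygen ≈ {gas_liters / 1e6:.1f} million liters of gas")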
Applications
Medical
Uptake of O2 from the air is the essential purpose of respiration, so oxygen supplementation is used in medicine. Treatment not only increases oxygen levels in the patient's blood, but has the secondary effect of decreasing resistance to blood flow in many types of diseased lungs, easing the workload on the heart. Oxygen therapy is used to treat emphysema, pneumonia, some heart disorders (congestive heart failure), some disorders that cause increased pulmonary artery pressure, and any disease that impairs the body's ability to take up and use gaseous oxygen.
Treatments are flexible enough to be used in hospitals, the patient's home, or increasingly by portable devices. Oxygen tents were once commonly used in oxygen supplementation, but have since been replaced mostly by the use of oxygen masks or nasal cannulas.
Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the 'bends') are sometimes addressed with this therapy. Increased O2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in the blood. Increasing the pressure of O2 as soon as possible helps to redissolve the bubbles back into the blood so that these excess gases can be exhaled naturally through the lungs. Normobaric oxygen administration at the highest available concentration is frequently used as first aid for any diving injury that may involve inert gas bubble formation in the tissues. There is epidemiological support for its use from a statistical study of cases recorded in a long-term database.
Life support and recreational use
An application of O2 as a low-pressure breathing gas is in modern space suits, which surround their occupant's body with the breathing gas. These devices use nearly pure oxygen at about one-third normal pressure, resulting in a normal blood partial pressure of O2. This trade-off of higher oxygen concentration for lower pressure is needed to maintain suit flexibility.
Scuba and surface-supplied underwater divers and submarines also rely on artificially delivered O2. Submarines, submersibles and atmospheric diving suits usually operate at normal atmospheric pressure. Breathing air is scrubbed of carbon dioxide by chemical extraction and oxygen is replaced to maintain a constant partial pressure. Ambient-pressure divers breathe air or gas mixtures with an oxygen fraction suited to the operating depth. Use of pure or nearly pure O2 in diving at pressures higher than atmospheric is usually limited to rebreathers, decompression at relatively shallow depths (~6 meters depth or less), or medical treatment in recompression chambers at pressures up to 2.8 bar, where acute oxygen toxicity can be managed without the risk of drowning. Deeper diving requires significant dilution of O2 with other gases, such as nitrogen or helium, to prevent oxygen toxicity.
People who climb mountains or fly in non-pressurized fixed-wing aircraft sometimes have supplemental O2 supplies. Pressurized commercial airplanes have an emergency supply of O2 automatically supplied to the passengers in case of cabin depressurization. Sudden cabin pressure loss activates chemical oxygen generators above each seat, causing oxygen masks to drop. Pulling on the masks "to start the flow of oxygen", as cabin safety instructions dictate, forces iron filings into the sodium chlorate inside the canister. A steady stream of oxygen gas is then produced by the exothermic reaction.
Oxygen, as a mild euphoric, has a history of recreational use in oxygen bars and in sports. Oxygen bars are establishments found in the United States since the late 1990s that offer higher-than-normal O2 exposure for a minimal fee. Professional athletes, especially in American football, sometimes go off-field between plays to don oxygen masks to boost performance. The pharmacological effect is doubted; a placebo effect is a more likely explanation. Available studies support a performance boost from oxygen-enriched mixtures only if they are inhaled during aerobic exercise.
Other recreational uses that do not involve breathing include pyrotechnic applications, such as George Goble's five-second ignition of barbecue grills.
Industrial
Smelting of iron ore into steel consumes 55% of commercially produced oxygen. In this process, O2 is injected through a high-pressure lance into molten iron, which removes sulfur impurities and excess carbon as the respective oxides, SO2 and CO2. The reactions are exothermic, so the temperature increases to 1,700 °C.
Another 25% of commercially produced oxygen is used by the chemical industry. Ethylene is reacted with O2 to create ethylene oxide, which, in turn, is converted into ethylene glycol, the primary feeder material used to manufacture a host of products, including antifreeze and polyester polymers (the precursors of many plastics and fabrics).
Most of the remaining 20% of commercially produced oxygen is used in medical applications, metal cutting and welding, as an oxidizer in rocket fuel, and in water treatment. Oxygen is used in oxyacetylene welding, burning acetylene with O2 to produce a very hot flame. In this process, thick sections of metal are first heated with a small oxy-acetylene flame and then quickly cut by a large stream of O2.
Compounds
The oxidation state of oxygen is −2 in almost all known compounds of oxygen. The oxidation state −1 is found in a few compounds such as peroxides. Compounds containing oxygen in other oxidation states are very uncommon: −1/2 (superoxides), −1/3 (ozonides), 0 (elemental, hypofluorous acid), +1/2 (dioxygenyl), +1 (dioxygen difluoride), and +2 (oxygen difluoride).
Oxides and other inorganic compounds
Water (H2O) is an oxide of hydrogen and the most familiar oxygen compound. Hydrogen atoms are covalently bonded to oxygen in a water molecule but also have an additional attraction (about 23.3 kJ/mol per hydrogen atom) to an adjacent oxygen atom in a separate molecule. These hydrogen bonds between water molecules hold them approximately 15% closer than would be expected in a simple liquid with just van der Waals forces.
Due to its electronegativity, oxygen forms chemical bonds with almost all other elements to give corresponding oxides. The surfaces of most metals, such as aluminium and titanium, are oxidized in the presence of air and become coated with a thin film of oxide that passivates the metal and slows further corrosion. Many oxides of the transition metals are non-stoichiometric compounds, with slightly less metal than the chemical formula would suggest. For example, the mineral FeO (wüstite) is written as Fe1−xO, where x is usually around 0.05.
Oxygen is present in the atmosphere in trace quantities in the form of carbon dioxide (CO2). The Earth's crustal rock is composed in large part of oxides of silicon (silica, SiO2, as found in granite and quartz), aluminium (aluminium oxide, Al2O3, in bauxite and corundum), iron (iron(III) oxide, Fe2O3, in hematite and rust), and calcium carbonate (in limestone). The rest of the Earth's crust is also made of oxygen compounds, in particular various complex silicates (in silicate minerals). The Earth's mantle, of much larger mass than the crust, is largely composed of silicates of magnesium and iron.
Water-soluble silicates in the form of , , and are used as detergents and adhesives.
Oxygen also acts as a ligand for transition metals, forming transition metal dioxygen complexes, which feature metal–O2 bonds. This class of compounds includes the heme proteins hemoglobin and myoglobin. An exotic and unusual reaction occurs with PtF6, which oxidizes oxygen to give O2+PtF6−, dioxygenyl hexafluoroplatinate.
Organic compounds
Among the most important classes of organic compounds that contain oxygen are (where "R" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-CO-NR₂). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH₃)₂CO) and phenol (C₆H₅OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms. The element is similarly found in almost all biomolecules that are important to (or generated by) life.
Oxygen reacts spontaneously with many organic compounds at or below room temperature in a process called autoxidation. Most of the organic compounds that contain oxygen are not made by direct action of O₂. Organic compounds important in industry and commerce that are made by direct oxidation of a precursor include ethylene oxide and peracetic acid.
Safety and precautions
The NFPA 704 standard rates compressed oxygen gas as nonhazardous to health, nonflammable and nonreactive, but an oxidizer. Refrigerated liquid oxygen (LOX) is given a health hazard rating of 3 (for increased risk of hyperoxia from condensed vapors, and for hazards common to cryogenic liquids such as frostbite), and all other ratings are the same as the compressed gas form.
Toxicity
Oxygen gas (O₂) can be toxic at elevated partial pressures, leading to convulsions and other health problems. Oxygen toxicity usually begins to occur at partial pressures more than 50 kilopascals (kPa), equal to about 50% oxygen composition at standard pressure or 2.5 times the normal sea-level O₂ partial pressure of about 21 kPa. This is not a problem except for patients on mechanical ventilators, since gas supplied through oxygen masks in medical applications is typically composed of only 30–50% O₂ by volume (about 30 kPa at standard pressure).
At one time, premature babies were placed in incubators containing O₂-rich air, but this practice was discontinued after some babies were blinded by the oxygen content being too high.
Breathing pure O₂ in space applications, such as in some modern space suits, or in early spacecraft such as Apollo, causes no damage due to the low total pressures used. In the case of spacesuits, the O₂ partial pressure in the breathing gas is, in general, about 30 kPa (1.4 times normal), and the resulting O₂ partial pressure in the astronaut's arterial blood is only marginally more than normal sea-level O₂ partial pressure.
Oxygen toxicity to the lungs and central nervous system can also occur in deep scuba diving and surface-supplied diving. Prolonged breathing of an air mixture with an O₂ partial pressure more than 60 kPa can eventually lead to permanent pulmonary fibrosis. Exposure to an O₂ partial pressure greater than 160 kPa (about 1.6 atm) may lead to convulsions (normally fatal for divers). Acute oxygen toxicity (causing seizures, its most feared effect for divers) can occur by breathing an air mixture with 21% O₂ at roughly 66 m or more of depth; the same thing can occur by breathing 100% O₂ at only about 6 m.
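Those depth figures follow directly from the 1.6 atm threshold above, since the partial pressure of a gas in a breathing mix scales with the total ambient pressure. The following Python sketch assumes the common rule of thumb of roughly one extra atmosphere per 10 m of seawater; the function names and rounding are illustrative, not taken from the article.

```python
# Rough sketch: O2 partial pressure underwater and the depth at which it
# reaches a given toxicity threshold. Assumes ~1 atm of extra pressure per
# 10 m of seawater; names and thresholds are illustrative.

ATM_PER_10_M = 1.0  # approximate added pressure (atm) per 10 m of seawater

def o2_partial_pressure(fraction_o2: float, depth_m: float) -> float:
    """O2 partial pressure (atm) for a breathing mix at a given depth."""
    total_pressure_atm = 1.0 + ATM_PER_10_M * depth_m / 10.0
    return fraction_o2 * total_pressure_atm

def depth_at_threshold(fraction_o2: float, threshold_atm: float = 1.6) -> float:
    """Depth (m) at which the mix reaches the given O2 partial pressure."""
    return 10.0 * (threshold_atm / fraction_o2 - 1.0) / ATM_PER_10_M

if __name__ == "__main__":
    print(f"Air (21% O2) reaches 1.6 atm pO2 at ~{depth_at_threshold(0.21):.0f} m")
    print(f"Pure O2 reaches 1.6 atm pO2 at ~{depth_at_threshold(1.00):.0f} m")
```

Running it gives roughly 66 m for air and 6 m for pure O₂, matching the values quoted above.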
Combustion and other hazards
Highly concentrated sources of oxygen promote rapid combustion. Fire and explosion hazards exist when concentrated oxidants and fuels are brought into close proximity; an ignition event, such as heat or a spark, is needed to trigger combustion. Oxygen is the oxidant, not the fuel.
Concentrated O₂ will allow combustion to proceed rapidly and energetically. Steel pipes and storage vessels used to store and transmit both gaseous and liquid oxygen will act as a fuel; therefore the design and manufacture of O₂ systems requires special training to ensure that ignition sources are minimized. The fire that killed the Apollo 1 crew in a launch pad test spread so rapidly because the capsule was pressurized with pure O₂ but at slightly more than atmospheric pressure, instead of the roughly one-third of atmospheric pressure that would be used in a mission.
Liquid oxygen spills, if allowed to soak into organic matter such as wood, petrochemicals, and asphalt, can cause these materials to detonate unpredictably on subsequent mechanical impact.
See also
Geological history of oxygen
Hypoxia (environmental), for O₂ depletion in aquatic ecology
Ocean deoxygenation
Hypoxia (medical), a lack of oxygen
Limiting oxygen concentration
Oxygen compounds
Oxygen plant
Oxygen sensor
Dark oxygen
Notes
References
General references
External links
Oxygen at The Periodic Table of Videos (University of Nottingham)
Oxidizing Agents > Oxygen
Oxygen (O2) Properties, Uses, Applications
Roald Hoffmann article on "The Story of O"
WebElements.com – Oxygen
Scripps Institute: Atmospheric Oxygen has been dropping for 20 years
Chemical elements
Diatomic nonmetals
Reactive nonmetals
Chalcogens
Chemical substances for emergency medicine
Breathing gases
E-number additives
Oxidizing agents | Oxygen | [
"Physics",
"Chemistry",
"Materials_science"
] | 10,038 | [
"Chemical elements",
"Redox",
"Diatomic nonmetals",
"Chemical substances for emergency medicine",
"Nonmetals",
"Oxidizing agents",
"Reactive nonmetals",
"Chemicals in medicine",
"Atoms",
"Matter"
] |
22,483 | https://en.wikipedia.org/wiki/Optics | Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Light is a type of electromagnetic radiation, and other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
Most optical phenomena can be accounted for by using the classical electromagnetic description of light; however, complete electromagnetic descriptions of light are often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation.
Some phenomena depend on light having both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems.
Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry, in which it is called physiological optics). Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics.
History
Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 2000 BC from Crete (Archaeological Museum of Heraclion, Greece). Lenses from Rhodes date around 700 BC, as do Assyrian lenses such as the Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική (optikē), meaning 'appearance' or 'look'.
Greek philosophy on optics broke down into two opposing theories on how vision worked, the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves (called eidola) that were captured by the eye. With many propagators including Democritus, Epicurus, Aristotle and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation.
Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid (4th–3rd century BC) wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics. He based his work on Plato's emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Euclid stated the principle of shortest trajectory of light, and considered multiple reflections on flat and spherical mirrors.
Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarized much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors and discussed the creation of magnified and reduced images, both real and imaginary, including the case of chirality of the images.
During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi (–873) who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena. In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", correctly describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for lenses and curved mirrors. In the early 11th century, Alhazen (Ibn al-Haytham) wrote the Book of Optics (Kitab al-manazir) in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to correctly explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo making it a standard text on optics in Europe for the next 400 years.
In the 13th century in medieval Europe, English bishop Robert Grosseteste wrote on a wide range of scientific topics, and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing it on the works of Aristotle and Platonism. Grosseteste's most famous disciple, Roger Bacon, wrote works citing a wide range of recently translated optical and philosophical works, including those of Alhazen, Aristotle, Avicenna, Averroes, Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African. Bacon was able to use parts of glass spheres as magnifying glasses to demonstrate that light reflects from objects rather than being released from them.
The first wearable eyeglasses were invented in Italy around 1286.
This was the start of the optical industry of grinding and polishing lenses for these "spectacles", first in Venice and Florence in the thirteenth century, and later in the spectacle making centres in both the Netherlands and Germany. Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses rather than using the rudimentary optical theory of the day (theory which for the most part could not even adequately explain how spectacles worked). This practical development, mastery, and experimentation with lenses led directly to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle making centres in the Netherlands.
In the early 17th century, Johannes Kepler expanded on geometric optics in his writings, covering lenses, reflection by flat and curved mirrors, the principles of pinhole cameras, inverse-square law governing the intensity of light, and the optical explanations of astronomical phenomena such as lunar and solar eclipses and astronomical parallax. He was also able to correctly deduce the role of the retina as the actual organ that recorded images, finally being able to scientifically quantify the effects of different types of lenses that spectacle makers had been observing over the previous 300 years. After the invention of the telescope, Kepler set out the theoretical basis on how they worked and described an improved version, known as the Keplerian telescope, using two convex lenses to produce higher magnification.
Optical theory progressed in the mid-17th century with treatises written by philosopher René Descartes, which explained a variety of optical phenomena including reflection and refraction by assuming that light was emitted by objects which produced it. This differed substantively from the ancient Greek emission theory. In the late 1660s and early 1670s, Isaac Newton expanded Descartes's ideas into a corpuscle theory of light, famously determining that white light was a mix of colours that can be separated into its component parts with a prism. In 1690, Christiaan Huygens proposed a wave theory for light based on suggestions that had been made by Robert Hooke in 1664. Hooke himself publicly criticised Newton's theories of light and the feud between the two lasted until Hooke's death. In 1704, Newton published Opticks and, at the time, partly because of his success in other areas of physics, he was generally considered to be the victor in the debate over the nature of light.
Newtonian optics was generally accepted until the early 19th century when Thomas Young and Augustin-Jean Fresnel conducted experiments on the interference of light that firmly established light's wave nature. Young's famous double slit experiment showed that light followed the superposition principle, which is a wave-like property not predicted by Newton's corpuscle theory. This work led to a theory of diffraction for light and opened an entire area of study in physical optics. Wave optics was successfully unified with electromagnetic theory by James Clerk Maxwell in the 1860s.
The next development in optical theory came in 1900 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta. In 1905, Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself. In 1913, Niels Bohr showed that atoms could only emit discrete amounts of energy, thus explaining the discrete lines seen in emission and absorption spectra. The understanding of the interaction between light and matter that followed from these developments not only formed the basis of quantum optics but also was crucial for the development of quantum mechanics as a whole. The ultimate culmination, the theory of quantum electrodynamics, explains all optics and electromagnetic processes in general as the result of the exchange of real and virtual photons. Quantum optics gained practical importance with the inventions of the maser in 1953 and of the laser in 1960.
Following the work of Paul Dirac in quantum field theory, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light.
Classical optics
Classical optics is divided into two main branches: geometrical (or ray) optics and physical (or wave) optics. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave.
Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled.
Geometrical optics
Geometrical optics, or ray optics, describes the propagation of light in terms of "rays" which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media. These laws were discovered empirically as far back as 984 AD and have been used in the design of optical components and instruments from then until the present day. They can be summarised as follows:
When a ray of light hits the boundary between two transparent materials, it is divided into a reflected and a refracted ray.
The law of reflection says that the reflected ray lies in the plane of incidence, and the angle of reflection equals the angle of incidence.
The law of refraction says that the refracted ray lies in the plane of incidence, and the sine of the angle of incidence divided by the sine of the angle of refraction is a constant: sin θ₁ / sin θ₂ = n, where n is a constant for any two materials and a given colour of light. If the first material is air or vacuum, n is the refractive index of the second material.
The laws of reflection and refraction can be derived from Fermat's principle which states that the path taken between two points by a ray of light is the path that can be traversed in the least time.
Approximations
Geometric optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications.
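To illustrate the matrix methods mentioned above, the following minimal Python sketch uses the standard 2×2 ray-transfer (ABCD) convention of Gaussian optics; the function names and the 100 mm example values are assumptions for demonstration, not taken from the article.

```python
import numpy as np

# Paraxial (ABCD) ray-transfer matrices: a ray is the column vector
# [height y, angle theta]; optical elements act on it by matrix multiplication.

def free_space(d):
    """Propagation over a distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Example: a ray parallel to the axis at height 5 mm hits a 100 mm lens.
ray = np.array([5.0, 0.0])                       # [y in mm, angle in rad]
system = free_space(100.0) @ thin_lens(100.0)    # lens first, then 100 mm of travel
y_out, theta_out = system @ ray
print(y_out, theta_out)   # y_out ~ 0: parallel rays cross the axis at the focus
```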
Reflections
Reflections can be divided into two types: specular reflection and diffuse reflection. Specular reflection describes the gloss of surfaces such as mirrors, which reflect light in a simple, predictable way. This allows for the production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. Diffuse reflection describes non-glossy materials, such as paper or rock. The reflections from these surfaces can only be described statistically, with the exact distribution of the reflected light depending on the microscopic structure of the material. Many diffuse reflectors are described or can be approximated by Lambert's cosine law, which describes surfaces that have equal luminance when viewed from any angle. Glossy surfaces can give both specular and diffuse reflection.
In specular reflection, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection.
For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. Corner reflectors produce reflected rays that travel back in the direction from which the incident rays came. This is called retroreflection.
Mirrors with curved surfaces can be modelled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with a magnification greater than or less than one, and the magnification can be negative, indicating that the image is inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen.
Refractions
Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n₁ and another medium with index of refraction n₂. In such situations, Snell's Law describes the resulting deflection of the light ray:
n₁ sin θ₁ = n₂ sin θ₂,
where θ₁ and θ₂ are the angles between the normal (to the interface) and the incident and refracted waves, respectively.
The index of refraction of a medium is related to the speed, v, of light in that medium by
n = c/v,
where c is the speed of light in vacuum.
Snell's Law can be used to predict the deflection of light rays as they pass through linear media as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. In most materials, the index of refraction varies with the frequency of the light, known as dispersion. Taking this into account, Snell's Law can be used to predict how a prism will disperse light into a spectrum. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton.
Some media have an index of refraction which varies gradually with position and, therefore, light rays in the medium are curved. This effect is responsible for mirages seen on hot days: a change in the index of refraction of air with height causes light rays to bend, creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Optical materials with varying indexes of refraction are called gradient-index (GRIN) materials. Such materials are used to make gradient-index optics.
For light rays travelling from a material with a high index of refraction to a material with a low index of refraction, Snell's law predicts that there is no θ₂ when θ₁ is large. In this case, no transmission occurs; all the light is reflected. This phenomenon is called total internal reflection and allows for fibre optics technology. As light travels down an optical fibre, it undergoes total internal reflection allowing for essentially no light to be lost over the length of the cable.
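A minimal Python sketch of Snell's law and total internal reflection follows; the refractive indices (1.5 for a generic glass) and angles are illustrative values only.

```python
import math

# Sketch: Snell's law n1*sin(t1) = n2*sin(t2), returning None when total
# internal reflection occurs (no real refraction angle).

def refraction_angle(n1, n2, theta1_deg):
    """Refracted angle in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None            # total internal reflection
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Critical angle (degrees) for light going from n1 into a rarer medium n2."""
    return math.degrees(math.asin(n2 / n1))

print(refraction_angle(1.0, 1.5, 30.0))   # air -> glass: bends toward the normal
print(refraction_angle(1.5, 1.0, 45.0))   # glass -> air at 45 deg: None (TIR)
print(critical_angle(1.5, 1.0))           # ~41.8 deg for a glass/air interface
```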
Lenses
A device that produces converging or diverging light rays due to refraction is known as a lens. Lenses are characterized by their focal length: a converging lens has positive focal length, while a diverging lens has negative focal length. Smaller focal length indicates that the lens has a stronger converging or diverging effect. The focal length of a simple lens in air is given by the lensmaker's equation.
Ray tracing can be used to show how images are formed by a lens. For a thin lens in air, the location of the image is given by the simple equation
1/S₁ + 1/S₂ = 1/f,
where S₁ is the distance from the object to the lens, S₂ is the distance from the lens to the image, and f is the focal length of the lens. In the sign convention used here, the object and image distances are positive if the object and image are on opposite sides of the lens.
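As a short illustration of the thin-lens relation, the Python sketch below solves for the image distance and transverse magnification; the 300 mm object distance and 100 mm focal length are made-up example numbers.

```python
# Sketch of the thin-lens relation 1/S1 + 1/S2 = 1/f (sign convention as in
# the text: distances positive when object and image are on opposite sides).

def image_distance(object_distance, focal_length):
    """Solve 1/S1 + 1/S2 = 1/f for the image distance S2."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def magnification(object_distance, image_distance):
    """Transverse magnification; negative values indicate an inverted image."""
    return -image_distance / object_distance

S1, f = 300.0, 100.0                 # object 300 mm from a 100 mm lens
S2 = image_distance(S1, f)           # 150 mm: real image beyond the focus
print(S2, magnification(S1, S2))     # 150.0, -0.5 (inverted, half-size)
```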
Incoming parallel rays are focused by a converging lens onto a spot one focal length from the lens, on the far side of the lens. This is called the rear focal point of the lens. Rays from an object at a finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens.
With diverging lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at a spot one focal length in front of the lens. This is the lens's front focal point. Rays from an object at a finite distance are associated with a virtual image that is closer to the lens than the focal point, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. As with mirrors, upright images produced by a single lens are virtual, while inverted images are real.
Lenses suffer from aberrations that distort images. Monochromatic aberrations occur because the geometry of the lens does not perfectly direct rays from each object point to a single point on the image, while chromatic aberration occurs because the index of refraction of the lens varies with the wavelength of the light.
Physical optics
In physical optics, light is considered to propagate as waves. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics. The speed of light waves in air is approximately 3.0×10⁸ m/s (exactly 299,792,458 m/s in vacuum). The wavelength of visible light waves varies between 400 and 700 nm, but the term "light" is also often applied to infrared (0.7–300 μm) and ultraviolet radiation (10–400 nm).
The wave model can be used to make predictions about how an optical system will behave without requiring an explanation of what is "waving" in what medium. Until the middle of the 19th century, most physicists believed in an "ethereal" medium in which the light disturbance propagated. The existence of electromagnetic waves was predicted in 1865 by Maxwell's equations. These waves propagate at the speed of light and have varying electric and magnetic fields which are orthogonal to one another, and also to the direction of propagation of the waves. Light waves are now generally treated as electromagnetic waves except when quantum mechanical effects have to be considered.
Modelling and design of optical systems using physical optics
Many simplified approximations are available for analysing and designing optical systems. Most of these use a single scalar quantity to represent the electric field of the light wave, rather than using a vector model with orthogonal electric and magnetic vectors.
The Huygens–Fresnel equation is one such model. This was derived empirically by Fresnel in 1815, based on Huygens' hypothesis that each point on a wavefront generates a secondary spherical wavefront, which Fresnel combined with the principle of superposition of waves. The Kirchhoff diffraction equation, which is derived using Maxwell's equations, puts the Huygens-Fresnel equation on a firmer physical foundation. Examples of the application of Huygens–Fresnel principle can be found in the articles on diffraction and Fraunhofer diffraction.
More rigorous models, involving the modelling of both electric and magnetic fields of the light wave, are required when dealing with materials whose electric and magnetic properties affect the interaction of light with the material. For instance, the behaviour of a light wave interacting with a metal surface is quite different from what happens when it interacts with a dielectric material. A vector model must also be used to model polarised light.
Numerical modeling techniques such as the finite element method, the boundary element method and the transmission-line matrix method can be used to model the propagation of light in systems which cannot be solved analytically. Such models are computationally demanding and are normally only used to solve small-scale problems that require accuracy beyond that which can be achieved with analytical solutions.
All of the results from geometrical optics can be recovered using the techniques of Fourier optics which apply many of the same mathematical and analytical techniques used in acoustic engineering and signal processing.
Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. This technique partially accounts for diffraction, allowing accurate calculations of the rate at which a laser beam expands with distance, and the minimum size to which the beam can be focused. Gaussian beam propagation thus bridges the gap between geometric and physical optics.
Superposition and interference
In the absence of nonlinear effects, the superposition principle can be used to predict the shape of interacting waveforms through the simple addition of the disturbances. This interaction of waves to produce a resulting pattern is generally termed "interference" and can result in a variety of outcomes. If two waves of the same wavelength and frequency are in phase, both the wave crests and wave troughs align. This results in constructive interference and an increase in the amplitude of the wave, which for light is associated with a brightening of the waveform in that location. Alternatively, if the two waves of the same wavelength and frequency are out of phase, then the wave crests will align with wave troughs and vice versa. This results in destructive interference and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location.
Since the Huygens–Fresnel principle states that every point of a wavefront is associated with the production of a new disturbance, it is possible for a wavefront to interfere with itself constructively or destructively at different locations producing bright and dark fringes in regular and predictable patterns. Interferometry is the science of measuring these patterns, usually as a means of making precise determinations of distances or angular resolutions. The Michelson interferometer was a famous instrument which used interference effects to accurately measure the speed of light.
The appearance of thin films and coatings is directly affected by interference effects. Antireflective coatings use destructive interference to reduce the reflectivity of the surfaces they coat, and can be used to minimise glare and unwanted reflections. The simplest case is a single layer with a thickness of one-fourth the wavelength of incident light. The reflected wave from the top of the film and the reflected wave from the film/material interface are then exactly 180° out of phase, causing destructive interference. The waves are only exactly out of phase for one wavelength, which would typically be chosen to be near the centre of the visible spectrum, around 550 nm. More complex designs using multiple layers can achieve low reflectivity over a broad band, or extremely low reflectivity at a single wavelength.
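As a rough illustration of the quarter-wave condition described above, the sketch below computes the physical thickness at which a coating's optical thickness (index times thickness) equals a quarter of the design wavelength; the MgF₂ index of 1.38 and the 550 nm design point are assumed example values.

```python
# Sketch: quarter-wave antireflective coating. The film's optical thickness
# n*t equals a quarter of the design wavelength, so the two reflected waves
# are 180 degrees out of phase. Values are illustrative.

def quarter_wave_thickness(wavelength_nm, film_index):
    """Physical thickness t = lambda / (4 n) of a quarter-wave layer."""
    return wavelength_nm / (4.0 * film_index)

print(quarter_wave_thickness(550.0, 1.38))   # ~99.6 nm of MgF2 for green light
```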
Constructive interference in thin films can create a strong reflection of light in a range of wavelengths, which can be narrow or broad depending on the design of the coating. These films are used to make dielectric mirrors, interference filters, heat reflectors, and filters for colour separation in colour television cameras. This interference effect is also what causes the colourful rainbow patterns seen in oil slicks.
Diffraction and optical resolution
Diffraction is the process by which light interference is most commonly observed. The effect was first described in 1665 by Francesco Maria Grimaldi, who also coined the term from the Latin diffringere, 'to break into pieces'. Later that century, Robert Hooke and Isaac Newton also described phenomena now known to be diffraction in Newton's rings while James Gregory recorded his observations of diffraction patterns from bird feathers.
The first physical optics model of diffraction that relied on the Huygens–Fresnel principle was developed in 1803 by Thomas Young in his interference experiments with the interference patterns of two closely spaced slits. Young showed that his results could only be explained if the two slits acted as two unique sources of waves rather than corpuscles. In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference can account for diffraction.
The simplest physical models of diffraction use equations that describe the angular separation of light and dark fringes due to light of a particular wavelength (λ). In general, the equation takes the form
mλ = d sin θ,
where d is the separation between two wavefront sources (in the case of Young's experiments, it was two slits), θ is the angular separation between the central fringe and the mth-order fringe, and the central maximum is m = 0.
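A brief Python sketch of this fringe condition follows; the 550 nm wavelength and 10 μm slit separation are example values chosen for illustration.

```python
import math

# Sketch of the two-slit bright-fringe condition m*lambda = d*sin(theta).

def fringe_angle_deg(order_m, wavelength_m, slit_separation_m):
    """Angle of the m-th bright fringe, or None if no such fringe exists."""
    s = order_m * wavelength_m / slit_separation_m
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

lam = 550e-9      # green light, 550 nm
d = 10e-6         # 10 micrometre slit separation
for m in range(4):
    print(m, fringe_angle_deg(m, lam, d))   # 0, ~3.2, ~6.3, ~9.5 degrees
```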
This equation is modified slightly to take into account a variety of situations such as diffraction through a single gap, diffraction through multiple slits, or diffraction through a diffraction grating that contains a large number of slits at equal spacing. More complicated models of diffraction require working with the mathematics of Fresnel or Fraunhofer diffraction.
X-ray diffraction makes use of the fact that atoms in a crystal have regular spacing at distances that are on the order of one angstrom. To see diffraction patterns, x-rays with similar wavelengths to that spacing are passed through the crystal. Since crystals are three-dimensional objects rather than two-dimensional gratings, the associated diffraction pattern varies in two directions according to Bragg reflection, with the associated bright spots occurring in unique patterns and d being twice the spacing between atoms.
Diffraction effects limit the ability of an optical detector to optically resolve separate light sources. In general, light that is passing through an aperture will experience diffraction and the best images that can be created (as described in diffraction-limited optics) appear as a central spot with surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The size of such a disk is given by sin θ = 1.22 λ/D, where θ is the angular resolution, λ is the wavelength of the light, and D is the diameter of the lens aperture. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius (measured to first null, that is, to the first place where no light is seen) can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the finer the resolution. Interferometry, with its ability to mimic extremely large baseline apertures, allows for the greatest angular resolution possible.
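A small Python sketch of the diffraction limit follows; using the small-angle form θ ≈ 1.22 λ/D, it estimates the angular resolution for a few aperture diameters. The wavelength and aperture sizes are illustrative, not from the article.

```python
import math

# Sketch of the Rayleigh criterion theta ~ 1.22*lambda/D for a circular aperture.

def rayleigh_limit_rad(wavelength_m, aperture_diameter_m):
    """Diffraction-limited angular resolution in radians (small-angle form)."""
    return 1.22 * wavelength_m / aperture_diameter_m

ARCSEC_PER_RAD = math.degrees(1.0) * 3600.0   # ~206265 arcseconds per radian

lam = 550e-9
for D in (0.1, 2.4, 10.0):                    # 10 cm amateur scope, 2.4 m, 10 m
    theta = rayleigh_limit_rad(lam, D)
    print(D, theta * ARCSEC_PER_RAD)          # ~1.4", ~0.06", ~0.014"
```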
For astronomical imaging, the atmosphere prevents optimal resolution from being achieved in the visible spectrum due to the atmospheric scattering and dispersion which cause stars to twinkle. Astronomers refer to this effect as the quality of astronomical seeing. Techniques known as adaptive optics have been used to eliminate the atmospheric disruption of images and achieve results that approach the diffraction limit.
Dispersion and scattering
Refractive processes take place in the physical optics limit, where the wavelength of light is similar to other distances, as a kind of scattering. The simplest type of scattering is Thomson scattering which occurs when electromagnetic waves are deflected by single particles. In the limit of Thomson scattering, in which the wavelike nature of light is evident, light is dispersed independent of the frequency, in contrast to Compton scattering which is frequency-dependent and strictly a quantum mechanical process, involving the nature of light as particles. In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering, while the similar process for scattering by particles that are of similar size to or larger than the wavelength is known as Mie scattering, with the Tyndall effect being a commonly observed result. A small proportion of light scattering from atoms or molecules may undergo Raman scattering, wherein the frequency changes due to excitation of the atoms and molecules. Brillouin scattering occurs when the frequency of light changes due to local changes with time and movements of a dense material.
Dispersion occurs when different frequencies of light have different phase velocities, due either to material properties (material dispersion) or to the geometry of an optical waveguide (waveguide dispersion). The most familiar form of dispersion is a decrease in index of refraction with increasing wavelength, which is seen in most transparent materials. This is called "normal dispersion". It occurs in all dielectric materials, in wavelength ranges where the material does not absorb light. In wavelength ranges where a medium has significant absorption, the index of refraction can increase with wavelength. This is called "anomalous dispersion".
The separation of colours by a prism is an example of normal dispersion. At the surfaces of the prism, Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin θ / n). Thus, blue light, with its higher refractive index, is bent more strongly than red light, resulting in the well-known rainbow pattern.
Material dispersion is often characterised by the Abbe number, which gives a simple measure of dispersion based on the index of refraction at three specific wavelengths. Waveguide dispersion is dependent on the propagation constant. Both kinds of dispersion cause changes in the group characteristics of the wave, the features of the wave packet that change with the same frequency as the amplitude of the electromagnetic wave. "Group velocity dispersion" manifests as a spreading-out of the signal "envelope" of the radiation and can be quantified with a group dispersion delay parameter:
D = (d/dλ)(1/v_g),
where v_g is the group velocity. For a uniform medium, the group velocity is
v_g = c (n − λ dn/dλ)⁻¹,
where n is the index of refraction and c is the speed of light in a vacuum. This gives a simpler form for the dispersion delay parameter:
D = −(λ/c) d²n/dλ².
If D is less than zero, the medium is said to have positive dispersion or normal dispersion. If D is greater than zero, the medium has negative dispersion. If a light pulse is propagated through a normally dispersive medium, the result is that the higher-frequency components slow down more than the lower-frequency components. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. This causes the spectrum coming out of a prism to appear with red light the least refracted and blue/violet light the most refracted. Conversely, if a pulse travels through an anomalously (negatively) dispersive medium, high-frequency components travel faster than the lower ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time.
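As a rough numerical sketch of the dispersion delay parameter above, the Python code below evaluates D = −(λ/c) d²n/dλ² for a toy Cauchy-type index n(λ) = A + B/λ²; the constants A and B are loosely glass-like but entirely illustrative.

```python
# Sketch: dispersion delay parameter D = -(lambda/c) * d^2 n / d lambda^2,
# estimated by a central difference for a toy index n(lambda) = A + B/lambda^2.

C = 299_792_458.0                      # speed of light, m/s

def n(lam, A=1.5, B=4.0e-15):          # lam in metres; A, B illustrative
    return A + B / lam**2

def dispersion_parameter(lam, h=1e-11):
    """Central-difference estimate of D = -(lam/c) * n''(lam)."""
    d2n = (n(lam + h) - 2.0 * n(lam) + n(lam - h)) / h**2
    return -(lam / C) * d2n

lam = 550e-9
print(dispersion_parameter(lam))   # negative value: normal (positive) dispersion
```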
The result of group velocity dispersion, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fibres, since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal.
Polarisation
Polarisation is a general property of waves that describes the orientation of their oscillations. For transverse waves such as many electromagnetic waves, it describes the orientation of the oscillations in the plane perpendicular to the wave's direction of travel. The oscillations may be oriented in a single direction (linear polarisation), or the oscillation direction may rotate as the wave travels (circular or elliptical polarisation). Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave's chirality.
The typical way to consider polarisation is to keep track of the orientation of the electric field vector as the electromagnetic wave propagates. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). The shape traced out in the x–y plane by the electric field vector is a Lissajous figure that describes the polarisation state. Plotting the evolution of the electric field vector with time at a particular point in space, along with its x and y components and the path traced by the vector in the plane, illustrates the possible polarisation states; the same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation.
When the x and y components of the light wave are in phase, the ratio of their strengths is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarisation. The direction of this line depends on the relative amplitudes of the two components.
When the two orthogonal components have the same amplitudes and are 90° out of phase, one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be 90° ahead of the y component or it can be 90° behind it. In this special case, the electric vector traces out a circle in the plane, so this polarisation is called circular polarisation. The rotation direction in the circle depends on which of the two phase relationships exists and corresponds to right-hand circular polarisation and left-hand circular polarisation.
In all other cases, where the two components either do not have the same amplitudes and/or their phase difference is neither zero nor a multiple of 90°, the polarisation is called elliptical polarisation because the electric vector traces out an ellipse in the plane (the polarisation ellipse). Detailed mathematics of polarisation is done using Jones calculus and is characterised by the Stokes parameters.
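A minimal sketch of the Jones calculus mentioned above is given below: polarisation states are represented as two-component complex vectors and a linear polariser as a 2×2 matrix. The particular states, angle, and function names are illustrative examples rather than anything prescribed by the article.

```python
import numpy as np

# Minimal Jones-calculus sketch: polarisation states as 2-component complex
# vectors and an ideal linear polariser as a 2x2 projection matrix.

def linear_polariser(angle_rad):
    """Jones matrix of an ideal linear polariser with its axis at angle_rad."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

horizontal = np.array([1.0, 0.0])                   # linear polarisation along x
circular = np.array([1.0, 1.0j]) / np.sqrt(2.0)     # circular polarisation

for state in (horizontal, circular):
    out = linear_polariser(np.pi / 4) @ state        # polariser at 45 degrees
    print(np.abs(np.vdot(out, out)))   # transmitted intensity: 0.5 in both cases
```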
Changing polarisation
Media that have different indexes of refraction for different polarisation modes are called birefringent. Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. It was this effect that provided the first discovery of polarisation, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarisation state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colours and rainbow-like effects. In mineralogy, such properties, known as pleochroism, are frequently exploited for the purpose of identifying minerals using polarisation microscopes. Additionally, many plastics that are not normally birefringent will become so when subject to mechanical stress, a phenomenon which is the basis of photoelasticity. Non-birefringent methods, to rotate the linear polarisation of light beams, include the use of prismatic polarisation rotators which use total internal reflection in a prism set designed for efficient collinear transmission.
Media that reduce the amplitude of certain polarisation modes are called dichroic, with devices that block nearly all of the radiation in one mode known as polarising filters or simply "polarisers". Malus' law, which is named after Étienne-Louis Malus, says that when a perfect polariser is placed in a linearly polarised beam of light, the intensity, I, of the light that passes through is given by
I = I₀ cos² θᵢ,
where I₀ is the initial intensity, and θᵢ is the angle between the light's initial polarisation direction and the axis of the polariser.
A beam of unpolarised light can be thought of as containing a uniform mixture of linear polarisations at all possible angles. Since the average value of cos² θ is 1/2, the transmission coefficient becomes I/I₀ = 1/2.
In practice, some light is lost in the polariser and the actual transmission of unpolarised light will be somewhat lower than this, around 38% for Polaroid-type polarisers but considerably higher (>49.9%) for some birefringent prism types.
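A short Python sketch of Malus' law for an ideal polariser follows, together with a quick numerical check that averaging over random polarisation angles reproduces the 1/2 transmission quoted above; all input values are illustrative.

```python
import math
import random

# Sketch of Malus' law I = I0 * cos^2(theta) for an ideal polariser.

def transmitted_intensity(i0, theta_rad):
    return i0 * math.cos(theta_rad) ** 2

print(transmitted_intensity(1.0, math.radians(30)))   # 0.75

# Unpolarised light ~ uniform mixture of polarisation angles: average is ~0.5.
angles = [random.uniform(0.0, math.pi) for _ in range(100_000)]
avg = sum(transmitted_intensity(1.0, a) for a in angles) / len(angles)
print(avg)   # close to 0.5
```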
In addition to birefringence and dichroism in extended media, polarisation effects can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on the angle of incidence and the angle of refraction. In this way, physical optics recovers Brewster's angle. When light reflects from a thin film on a surface, interference between the reflections from the film's surfaces can produce polarisation in the reflected and transmitted light.
Natural light
Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarised. If there is partial correlation between the emitters, the light is partially polarised. If the polarisation is consistent across the spectrum of the source, partially polarised light can be described as a superposition of a completely unpolarised component, and a completely polarised one. One may then describe the light in terms of the degree of polarisation, and the parameters of the polarisation ellipse.
Light reflected by shiny transparent materials is partly or fully polarised, except when the light is normal (perpendicular) to the surface. It was this effect that allowed the mathematician Étienne-Louis Malus to make the measurements that allowed for his development of the first mathematical models for polarised light. Polarisation occurs when light is scattered in the atmosphere. The scattered light produces the brightness and colour in clear skies. This partial polarisation of scattered light can be taken advantage of using polarising filters to darken the sky in photographs. Optical polarisation is principally of importance in chemistry due to circular dichroism and optical rotation (circular birefringence) exhibited by optically active (chiral) molecules.
Modern optics
Modern optics encompasses the areas of optical science and engineering that became popular in the 20th century. These areas of optical science typically relate to the electromagnetic or quantum properties of light but do include other topics. A major subfield of modern optics, quantum optics, deals with specifically quantum mechanical properties of light. Quantum optics is not just theoretical; some modern devices, such as lasers, have principles of operation that depend on quantum mechanics. Light detectors, such as photomultipliers and channeltrons, respond to individual photons. Electronic image sensors, such as CCDs, exhibit shot noise corresponding to the statistics of individual photon events. Light-emitting diodes and photovoltaic cells, too, cannot be understood without quantum mechanics. In the study of these devices, quantum optics often overlaps with quantum electronics.
Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials. Other research focuses on the phenomenology of electromagnetic waves as in singular optics, non-imaging optics, non-linear optics, statistical optics, and radiometry. Additionally, computer engineers have taken an interest in integrated optics, machine vision, and photonic computing as possible components of the "next generation" of computers.
Today, the pure science of optics is called optical science or optical physics to distinguish it from applied optical sciences, which are referred to as optical engineering. Prominent subfields of optical engineering include illumination engineering, photonics, and optoelectronics with practical applications like lens design, fabrication and testing of optical components, and image processing. Some of these fields overlap, with nebulous boundaries between the subjects' terms that mean slightly different things in different parts of the world and in different areas of industry. A professional community of researchers in nonlinear optics has developed in the last several decades due to advances in laser technology.
Lasers
A laser is a device that emits light, a kind of electromagnetic radiation, through a process called stimulated emission. The term laser is an acronym for light amplification by stimulated emission of radiation. Laser light is usually spatially coherent, which means that the light either is emitted in a narrow, low-divergence beam, or can be converted into one with the help of optical components such as lenses. Because the microwave equivalent of the laser, the maser, was developed first, devices that emit microwave and radio frequencies are usually called masers.
The first working laser was demonstrated on 16 May 1960 by Theodore Maiman at Hughes Research Laboratories. When first invented, they were called "a solution looking for a problem". Since then, lasers have become a multibillion-dollar industry, finding utility in thousands of highly varied applications. The first application of lasers visible in the daily lives of the general population was the supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become truly common in consumers' homes, beginning in 1982. These optical storage devices use a semiconductor laser less than a millimetre wide to scan the surface of the disc for data retrieval. Fibre-optic communication relies on lasers to transmit large amounts of information at the speed of light. Other common applications of lasers include laser printers and laser pointers. Lasers are used in medicine in areas such as bloodless surgery, laser eye surgery, and laser capture microdissection and in military applications such as missile defence systems, electro-optical countermeasures (EOCM), and lidar. Lasers are also used in holograms, bubblegrams, laser light shows, and laser hair removal.
Kapitsa–Dirac effect
The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Light can be used to position matter using various phenomena (see optical tweezers).
Applications
Optics is part of everyday life. The ubiquity of visual systems in biology indicates the central role optics plays as the science of one of the five senses. Many people benefit from eyeglasses or contact lenses, and optics are integral to the functioning of many consumer goods including cameras. Rainbows and mirages are examples of optical phenomena. Optical communication provides the backbone for both the Internet and modern telephony.
Human eye
The human eye functions by focusing light onto a layer of photoreceptor cells called the retina, which forms the inner lining of the back of the eye. The focusing is accomplished by a series of transparent media. Light entering the eye passes first through the cornea, which provides much of the eye's optical power. The light then continues through the fluid just behind the cornea—the anterior chamber, then passes through the pupil. The light then passes through the lens, which focuses the light further and allows adjustment of focus. The light then passes through the main body of fluid in the eye—the vitreous humour, and reaches the retina. The cells in the retina line the back of the eye, except for where the optic nerve exits; this results in a blind spot.
There are two types of photoreceptor cells, rods and cones, which are sensitive to different aspects of light. Rod cells are sensitive to the intensity of light over a wide frequency range, thus are responsible for black-and-white vision. Rod cells are not present on the fovea, the area of the retina responsible for central vision, and are not as responsive as cone cells to spatial and temporal changes in light. There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. Because of their wider distribution, rods are responsible for peripheral vision.
In contrast, cone cells are less sensitive to the overall intensity of light, but come in three varieties that are sensitive to different frequency-ranges and thus are used in the perception of colour and photopic vision. Cone cells are highly concentrated in the fovea and have a high visual acuity meaning that they are better at spatial resolution than rod cells. Since cone cells are not as sensitive to dim light as rod cells, most night vision is limited to rod cells. Likewise, since cone cells are in the fovea, central vision (including the vision needed to do most reading, fine detail work such as sewing, or careful examination of objects) is done by cone cells.
Ciliary muscles around the lens allow the eye's focus to be adjusted. This process is known as accommodation. The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. For a person with normal vision, the far point is located at infinity. The near point's location depends on how much the muscles can increase the curvature of the lens, and how inflexible the lens has become with age. Optometrists, ophthalmologists, and opticians usually consider an appropriate near point to be closer than normal reading distance—approximately 25 cm.
Defects in vision can be explained using optical principles. As people age, the lens becomes less flexible and the near point recedes from the eye, a condition known as presbyopia. Similarly, people suffering from hyperopia cannot decrease the focal length of their lens enough to allow for nearby objects to be imaged on their retina. Conversely, people who cannot increase the focal length of their lens enough to allow for distant objects to be imaged on the retina suffer from myopia and have a far point that is considerably closer than infinity. A condition known as astigmatism results when the cornea is not spherical but instead is more curved in one direction. This causes horizontally extended objects to be focused on different parts of the retina than vertically extended objects, and results in distorted images.
All of these conditions can be corrected using corrective lenses. For presbyopia and hyperopia, a converging lens provides the extra curvature necessary to bring the near point closer to the eye while for myopia a diverging lens provides the curvature necessary to send the far point to infinity. Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea.
The optical power of corrective lenses is measured in diopters, a value equal to the reciprocal of the focal length measured in metres; with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens. For lenses that correct for astigmatism as well, three numbers are given: one for the spherical power, one for the cylindrical power, and one for the angle of orientation of the astigmatism.
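The dioptre convention above amounts to a simple reciprocal; the Python sketch below illustrates it with made-up prescription numbers.

```python
# Sketch: lens power in dioptres is the reciprocal of the focal length in
# metres. The example prescriptions are illustrative.

def power_dioptres(focal_length_m):
    return 1.0 / focal_length_m

def focal_length_metres(power_d):
    return 1.0 / power_d

print(power_dioptres(0.5))        # a 0.5 m converging lens is +2.0 D
print(focal_length_metres(-4.0))  # a -4.0 D lens (for myopia) has f = -0.25 m
```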
Visual effects
Optical illusions (also called visual illusions) are characterized by visually perceived images that differ from objective reality. The information gathered by the eye is processed in the brain to give a percept that differs from the object being imaged. Optical illusions can be the result of a variety of phenomena including physical effects that create images that are different from the objects that make them, the physiological effects on the eyes and brain of excessive stimulation (e.g. brightness, tilt, colour, movement), and cognitive illusions where the eye and brain make unconscious inferences.
Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. For example, the Ames room, Hering, Müller-Lyer, Orbison, Ponzo, Sander, and Wundt illusions all rely on the suggestion of the appearance of distance by using converging and diverging lines, in the same way that parallel light rays (or indeed any set of parallel lines) appear to converge at a vanishing point at infinity in two-dimensionally rendered images with artistic perspective. This suggestion is also responsible for the famous moon illusion where the moon, despite having essentially the same angular size, appears much larger near the horizon than it does at zenith. This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, Optics.
Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Examples include the café wall, Ehrenstein, Fraser spiral, Poggendorff, and Zöllner illusions. Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures. For example, transparent tissues with a grid structure produce shapes known as moiré patterns, while the superimposition of periodic transparent patterns comprising parallel opaque lines or curves produces line moiré patterns.
Optical instruments
Single lenses have a variety of applications including photographic lenses, corrective lenses, and magnifying glasses while single mirrors are used in parabolic reflectors and rear-view mirrors. Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. For example, a periscope is simply two plane mirrors aligned to allow for viewing around obstructions. The most famous compound optical instruments in science are the microscope and the telescope which were both invented by the Dutch in the late 16th century.
Microscopes were first developed with just two lenses: an objective lens and an eyepiece. The objective lens is essentially a magnifying glass and was designed with a very small focal length while the eyepiece generally has a longer focal length. This has the effect of producing magnified images of close objects. Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. Modern microscopes, known as compound microscopes have many lenses in them (typically four) to optimize the functionality and enhance image stability. A slightly different variety of microscope, the comparison microscope, looks at side-by-side images to produce a stereoscopic binocular view that appears three dimensional when used by humans.
The first telescopes, called refracting telescopes, were also developed with a single objective and eyepiece lens. In contrast to the microscope, the objective lens of the telescope was designed with a large focal length to avoid optical aberrations. The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. The main goal of a telescope is not necessarily magnification, but rather the collection of light which is determined by the physical size of the objective lens. Thus, telescopes are normally indicated by the diameters of their objectives rather than by the magnification which can be changed by switching eyepieces. Because the magnification of a telescope is equal to the focal length of the objective divided by the focal length of the eyepiece, smaller focal-length eyepieces cause greater magnification.
Since crafting large lenses is much more difficult than crafting large mirrors, most modern telescopes are reflecting telescopes, that is, telescopes that use a primary mirror rather than an objective lens. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point.
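The two quantities highlighted above, magnification set by the ratio of focal lengths and light grasp set by the aperture, lend themselves to a short back-of-the-envelope calculation. The Python sketch below uses illustrative, made-up instrument dimensions rather than data for any particular telescope.

```python
# Telescope magnification = objective focal length / eyepiece focal length.
# Light-gathering power scales with the square of the aperture diameter.
# All numbers below are illustrative.

def magnification(objective_focal_mm: float, eyepiece_focal_mm: float) -> float:
    return objective_focal_mm / eyepiece_focal_mm

def light_grasp_ratio(aperture1_mm: float, aperture2_mm: float) -> float:
    """How much more light the first aperture collects than the second."""
    return (aperture1_mm / aperture2_mm) ** 2

print(magnification(1200, 10))     # 120x with a 10 mm eyepiece
print(magnification(1200, 25))     # 48x after switching to a 25 mm eyepiece
print(light_grasp_ratio(200, 50))  # a 200 mm aperture collects 16x the light of a 50 mm one
```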
Photography
The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate, film, or charge-coupled device. Photographers must consider the reciprocity of the camera and the shot which is summarized by the relation
Exposure ∝ ApertureArea × ExposureTime × SceneLuminance
In other words, the smaller the aperture (giving greater depth of focus), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). An example of the use of the law of reciprocity is the Sunny 16 rule which gives a rough estimate for the settings needed to estimate the proper exposure in daylight.
A camera's aperture is measured by a unitless number called the f-number or f-stop, f/#, often notated as N, and given by
f/# = N = f/D
where f is the focal length, and D is the diameter of the entrance pupil. By convention, "f/#" is treated as a single symbol, and specific values of f/# are written by replacing the number sign with the value. The two ways to increase the f-stop are to either decrease the diameter of the entrance pupil or change to a longer focal length (in the case of a zoom lens, this can be done by simply adjusting the lens). Higher f-numbers also have a larger depth of field due to the lens approaching the limit of a pinhole camera which is able to focus all images perfectly, regardless of distance, but requires very long exposure times.
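A minimal numerical sketch of these definitions follows, assuming only N = f/D and the standard convention that each factor of the square root of 2 in the f-number ("one stop") halves the light reaching the sensor; the focal lengths and pupil diameters are made-up examples.

```python
import math

# f-number as defined above: N = f / D.  Sensor illuminance scales roughly as 1/N^2,
# so raising N by a factor of sqrt(2) ("one stop") halves the light and doubles the
# exposure time required.  Example values are illustrative.

def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    return focal_length_mm / pupil_diameter_mm

def stops_between(n_from: float, n_to: float) -> float:
    """Exposure difference, in stops, when only the f-number changes."""
    return 2 * math.log2(n_to / n_from)

print(f_number(50, 25))       # 2.0, i.e. f/2
print(stops_between(2, 16))   # 6.0 stops: exposure time must grow by 2**6 = 64x
```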
The field of view that the lens will provide changes with the focal length of the lens. There are three basic classifications based on the relationship to the diagonal size of the film or sensor size of the camera to the focal length of the lens:
Normal lens: angle of view of about 50° (called normal because this angle is considered roughly equivalent to human vision) and a focal length approximately equal to the diagonal of the film or sensor.
Wide-angle lens: angle of view wider than 60° and focal length shorter than a normal lens.
Long focus lens: angle of view narrower than a normal lens. This is any lens with a focal length longer than the diagonal measure of the film or sensor. The most common type of long focus lens is the telephoto lens, a design that uses a special telephoto group to be physically shorter than its focal length.
Modern zoom lenses may have some or all of these attributes.
The absolute value for the exposure time required depends on how sensitive to light the medium being used is (measured by the film speed, or, for digital media, by the quantum efficiency). Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. As technology has improved, so has the light sensitivity of both film and digital cameras.
Other results from physical and geometrical optics apply to camera optics. For example, the maximum resolution capability of a particular camera set-up is determined by the diffraction limit associated with the pupil size and given, roughly, by the Rayleigh criterion.
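For a concrete sense of that limit, the small-angle Rayleigh criterion θ ≈ 1.22 λ/D can be evaluated directly; the sketch below assumes green light and a made-up 5 mm entrance pupil.

```python
import math

# Diffraction-limited angular resolution from the Rayleigh criterion,
# theta ≈ 1.22 * wavelength / aperture diameter (small-angle form, in radians).

def rayleigh_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    return 1.22 * wavelength_m / aperture_m

theta = rayleigh_limit_rad(550e-9, 0.005)   # green light through a 5 mm entrance pupil
print(f"{theta:.2e} rad  (~{math.degrees(theta) * 3600:.0f} arcseconds)")
```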
Atmospheric optics
The unique optical properties of the atmosphere cause a wide range of spectacular optical phenomena. The blue colour of the sky is a direct result of Rayleigh scattering which redirects higher frequency (blue) sunlight back into the field of view of the observer. Because blue light is scattered more easily than red light, the sun takes on a reddish hue when it is observed through a thick atmosphere, as during a sunrise or sunset. Additional particulate matter in the sky can scatter different colours at different angles creating colourful glowing skies at dusk and dawn. Scattering off ice crystals and other particles in the atmosphere is responsible for halos, afterglows, coronas, rays of sunlight, and sun dogs. The variation in these kinds of phenomena is due to different particle sizes and geometries.
Mirages are optical phenomena in which light rays are bent due to thermal variations in the refraction index of air, producing displaced or heavily distorted images of distant objects. Other dramatic optical phenomena associated with this include the Novaya Zemlya effect where the sun appears to rise earlier than predicted with a distorted shape. A spectacular form of refraction occurs with a temperature inversion called the Fata Morgana where objects on the horizon or even beyond the horizon, such as islands, cliffs, ships or icebergs, appear elongated and elevated, like "fairy tale castles".
Rainbows are the result of a combination of internal reflection and dispersive refraction of light in raindrops. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size on the sky that ranges from 40° to 42° with red on the outside. Double rainbows are produced by two internal reflections with angular size of 50.5° to 54° with violet on the outside. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon.
See also
Ion optics
Important publications in optics
List of optical topics
List of textbooks in electromagnetism
References
Works cited
Further reading
External links
Relevant discussions
Textbooks and tutorials
Light and Matter – an open-source textbook, containing a treatment of optics in ch. 28–32
Optics2001 – Optics library and community
Fundamental Optics – Melles Griot Technical Guide
Physics of Light and Optics – Brigham Young University Undergraduate Book
Optics for PV – a step-by-step introduction to classical optics
Further reading
Optics and photonics: Physics enhancing our lives by Institute of Physics publications
Societies
European Optical Society
The Optical Society (OSA)
SPIE
European Photonics Industry Consortium
Applied and interdisciplinary physics
Atomic, molecular, and optical physics
Natural philosophy | Optics | [
"Physics",
"Chemistry"
] | 12,362 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Optics",
"Spectrum (physical sciences)",
"Electromagnetic radiation",
"Electromagnetic spectrum",
"Waves",
"Radiation",
"Light",
" molecular",
"Atomic",
" and optical physics"
] |
22,526 | https://en.wikipedia.org/wiki/Organometallic%20chemistry | Organometallic chemistry is the study of organometallic compounds, chemical compounds containing at least one chemical bond between a carbon atom of an organic molecule and a metal, including alkali, alkaline earth, and transition metals, and sometimes broadened to include metalloids like boron, silicon, and selenium, as well. Aside from bonds to organyl fragments or molecules, bonds to 'inorganic' carbon, like carbon monoxide (metal carbonyls), cyanide, or carbide, are generally considered to be organometallic as well. Some related compounds such as transition metal hydrides and metal phosphine complexes are often included in discussions of organometallic compounds, though strictly speaking, they are not necessarily organometallic. The related but distinct term "metalorganic compound" refers to metal-containing compounds lacking direct metal-carbon bonds but which contain organic ligands. Metal β-diketonates, alkoxides, dialkylamides, and metal phosphine complexes are representative members of this class. The field of organometallic chemistry combines aspects of traditional inorganic and organic chemistry.
Organometallic compounds are widely used both stoichiometrically in research and industrial chemical reactions, as well as in the role of catalysts to increase the rates of such reactions (e.g., as in uses of homogeneous catalysis), where target molecules include polymers, pharmaceuticals, and many other types of practical products.
Organometallic compounds
Organometallic compounds are distinguished by the prefix "organo-" (e.g., organopalladium compounds), and include all compounds which contain a bond between a metal atom and a carbon atom of an organyl group. In addition to the traditional metals (alkali metals, alkali earth metals, transition metals, and post transition metals), lanthanides, actinides, semimetals, and the elements boron, silicon, arsenic, and selenium are considered to form organometallic compounds. Examples of organometallic compounds include Gilman reagents, which contain lithium and copper, and Grignard reagents, which contain magnesium. Boron-containing organometallic compounds are often the result of hydroboration and carboboration reactions. Tetracarbonyl nickel and ferrocene are examples of organometallic compounds containing transition metals. Other examples of organometallic compounds include organolithium compounds such as n-butyllithium (n-BuLi), organozinc compounds such as diethylzinc (Et2Zn), organotin compounds such as tributyltin hydride (Bu3SnH), organoborane compounds such as triethylborane (Et3B), and organoaluminium compounds such as trimethylaluminium (Me3Al).
A naturally occurring organometallic complex is methylcobalamin (a form of Vitamin B12), which contains a cobalt-methyl bond. This complex, along with other biologically relevant complexes are often discussed within the subfield of bioorganometallic chemistry.
Distinction from coordination compounds with organic ligands
Many complexes feature coordination bonds between a metal and organic ligands. Complexes where the organic ligands bind the metal through a heteroatom such as oxygen or nitrogen are considered coordination compounds (e.g., heme A and Fe(acac)3). However, if any of the ligands form a direct metal-carbon (M-C) bond, then the complex is considered to be organometallic. Although the IUPAC has not formally defined the term, some chemists use the term "metalorganic" to describe any coordination compound containing an organic ligand regardless of the presence of a direct M-C bond.
The status of compounds in which the canonical anion has a negative charge that is shared between (delocalized) a carbon atom and an atom more electronegative than carbon (e.g. enolates) may vary with the nature of the anionic moiety, the metal ion, and possibly the medium. In the absence of direct structural evidence for a carbon–metal bond, such compounds are not considered to be organometallic. For instance, lithium enolates often contain only Li-O bonds and are not organometallic, while zinc enolates (Reformatsky reagents) contain both Zn-O and Zn-C bonds, and are organometallic in nature.
Structure and properties
The metal-carbon bond in organometallic compounds is generally highly covalent. For highly electropositive elements, such as lithium and sodium, the carbon ligand exhibits carbanionic character, but free carbon-based anions are extremely rare, an example being cyanide.
Most organometallic compounds are solids at room temperature, however some are liquids such as methylcyclopentadienyl manganese tricarbonyl, or even volatile liquids such as nickel tetracarbonyl. Many organometallic compounds are air sensitive (reactive towards oxygen and moisture), and thus they must be handled under an inert atmosphere. Some organometallic compounds such as triethylaluminium are pyrophoric and will ignite on contact with air.
Concepts and techniques
As in other areas of chemistry, electron counting is useful for organizing organometallic chemistry. The 18-electron rule is helpful in predicting the stabilities of organometallic complexes, for example metal carbonyls and metal hydrides. The 18e rule has two representative electron counting models, ionic and neutral (also known as covalent) ligand models, respectively. The hapticity of a metal-ligand complex, can influence the electron count. Hapticity (η, lowercase Greek eta), describes the number of contiguous ligands coordinated to a metal. For example, ferrocene, [(η5-C5H5)2Fe], has two cyclopentadienyl ligands giving a hapticity of 5, where all five carbon atoms of the C5H5 ligand bond equally and contribute one electron to the iron center. Ligands that bind non-contiguous atoms are denoted the Greek letter kappa, κ. Chelating κ2-acetate is an example. The covalent bond classification method identifies three classes of ligands, X,L, and Z; which are based on the electron donating interactions of the ligand. Many organometallic compounds do not follow the 18e rule. The metal atoms in organometallic compounds are frequently described by their d electron count and oxidation state. These concepts can be used to help predict their reactivity and preferred geometry. Chemical bonding and reactivity in organometallic compounds is often discussed from the perspective of the isolobal principle.
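As a toy illustration of the neutral ("covalent") counting convention mentioned above, the following sketch tallies valence electrons for a few textbook complexes. The metal and ligand contributions listed are the usual neutral-count values, and the examples are restricted to cases where this simple bookkeeping is sufficient.

```python
# Toy electron count using the neutral ("covalent") ligand model described above:
# the metal contributes its group number of valence electrons, and each neutral
# ligand contributes a fixed number.  This is bookkeeping only, not a general tool.

METAL_ELECTRONS = {"Fe": 8, "Ni": 10, "Cr": 6}
LIGAND_ELECTRONS = {"CO": 2, "eta5-C5H5": 5, "eta6-C6H6": 6, "PR3": 2, "H": 1, "CH3": 1}

def electron_count(metal: str, ligands: list) -> int:
    return METAL_ELECTRONS[metal] + sum(LIGAND_ELECTRONS[lig] for lig in ligands)

print(electron_count("Fe", ["eta5-C5H5"] * 2))   # ferrocene: 8 + 5 + 5 = 18
print(electron_count("Ni", ["CO"] * 4))          # nickel tetracarbonyl: 10 + 4*2 = 18
print(electron_count("Cr", ["eta6-C6H6"] * 2))   # dibenzenechromium: 6 + 6 + 6 = 18
```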
A wide variety of physical techniques are used to determine the structure, composition, and properties of organometallic compounds. X-ray diffraction is a particularly important technique that can locate the positions of atoms within a solid compound, providing a detailed description of its structure. Other techniques like infrared spectroscopy and nuclear magnetic resonance spectroscopy are also frequently used to obtain information on the structure and bonding of organometallic compounds. Ultraviolet-visible spectroscopy is a common technique used to obtain information on the electronic structure of organometallic compounds. It is also used to monitor the progress of organometallic reactions, as well as to determine their kinetics. The dynamics of organometallic compounds can be studied using dynamic NMR spectroscopy. Other notable techniques include X-ray absorption spectroscopy, electron paramagnetic resonance spectroscopy, and elemental analysis.
Due to their high reactivity towards oxygen and moisture, organometallic compounds often must be handled using air-free techniques. Air-free handling of organometallic compounds typically requires the use of laboratory apparatuses such as a glovebox or Schlenk line.
History
Early developments in organometallic chemistry include Louis Claude Cadet's synthesis of methyl arsenic compounds related to cacodyl, William Christopher Zeise's platinum-ethylene complex, Edward Frankland's discovery of diethyl- and dimethylzinc, Ludwig Mond's discovery of Ni(CO)4, and Victor Grignard's organomagnesium compounds. (Although not always acknowledged as an organometallic compound, Prussian blue, a mixed-valence iron-cyanide complex, was first prepared in 1706 by paint maker Johann Jacob Diesbach as the first coordination polymer and synthetic material containing a metal-carbon bond.) The abundant and diverse products from coal and petroleum led to Ziegler–Natta, Fischer–Tropsch, hydroformylation catalysis which employ CO, H2, and alkenes as feedstocks and ligands.
Recognition of organometallic chemistry as a distinct subfield culminated in the Nobel Prizes to Ernst Fischer and Geoffrey Wilkinson for work on metallocenes. In 2005, Yves Chauvin, Robert H. Grubbs and Richard R. Schrock shared the Nobel Prize for metal-catalyzed olefin metathesis.
Organometallic chemistry timeline
1760 Louis Claude Cadet de Gassicourt isolates the organoarsenic compound cacodyl
1827 William Christopher Zeise produces Zeise's salt; the first platinum / olefin complex
1848 Edward Frankland discovers diethylzinc
1890 Ludwig Mond discovers nickel carbonyl
1899 John Ulric Nef discovers alkynylation using sodium acetylides.
1909 Paul Ehrlich introduces Salvarsan for the treatment of syphilis, an early arsenic based organometallic compound
1912 Nobel Prize Victor Grignard and Paul Sabatier
1930 Henry Gilman invents lithium cuprates, see Gilman reagent
1940 Eugene G. Rochow and Richard Müller discover the direct process for preparing organosilicon compounds
1930s and 1940s Otto Roelen and Walter Reppe develop metal-catalyzed hydroformylation and acetylene chemistry
1951 Walter Hieber was awarded the Alfred Stock prize for his work with metal carbonyl chemistry.
1951 Ferrocene is discovered
1956 Dorothy Crowfoot Hodgkin determines the structure of vitamin B12, the first biomolecule found to contain a metal-carbon bond, see bioorganometallic chemistry
1963 Nobel prize for Karl Ziegler and Giulio Natta on Ziegler–Natta catalyst
1973 Nobel prize Geoffrey Wilkinson and Ernst Otto Fischer on sandwich compounds
1981 Nobel prize Roald Hoffmann and Kenichi Fukui for creation of the Woodward–Hoffmann rules
2001 Nobel prize W. S. Knowles, R. Noyori and Karl Barry Sharpless for asymmetric hydrogenation
2005 Nobel prize Yves Chauvin, Robert Grubbs, and Richard Schrock on metal-catalyzed alkene metathesis
2010 Nobel prize Richard F. Heck, Ei-ichi Negishi, Akira Suzuki for palladium catalyzed cross coupling reactions
Scope
Subspecialty areas of organometallic chemistry include:
Period 2 elements: organolithium chemistry, organoberyllium chemistry, organoborane chemistry
Period 3 elements: organosodium chemistry, organomagnesium chemistry, organoaluminium chemistry, organosilicon chemistry
Period 4 elements: organocalcium chemistry, organoscandium chemistry, organotitanium chemistry, organovanadium chemistry, organochromium chemistry, organomanganese chemistry, organoiron chemistry, organocobalt chemistry, organonickel chemistry, organocopper chemistry, organozinc chemistry, organogallium chemistry, organogermanium chemistry, organoarsenic chemistry, organoselenium chemistry
Period 5 elements: organoyttrium chemistry, organozirconium chemistry, organoniobium chemistry, organomolybdenum chemistry, organotechnetium chemistry, organoruthenium chemistry, organorhodium chemistry, organopalladium chemistry, organosilver chemistry, organocadmium chemistry, organoindium chemistry, organotin chemistry, organoantimony chemistry, organotellurium chemistry
Period 6 elements: organolanthanide chemistry, organocerium chemistry, organotantalum chemistry, organotungsten chemistry, organorhenium chemistry, organoosmium chemistry, organoiridium chemistry, organoplatinum chemistry, organogold chemistry, organomercury chemistry, organothallium chemistry, organolead chemistry, organobismuth chemistry, organopolonium chemistry
Period 7 elements: organoactinide chemistry, organothorium chemistry, organouranium chemistry, organoneptunium chemistry
Industrial applications
Organometallic compounds find wide use in commercial reactions, both as homogeneous catalysts and as stoichiometric reagents. For instance, organolithium, organomagnesium, and organoaluminium compounds, examples of which are highly basic and highly reducing, are useful stoichiometrically but also catalyze many polymerization reactions.
Almost all processes involving carbon monoxide rely on catalysts, notable examples being described as carbonylations. The production of acetic acid from methanol and carbon monoxide is catalyzed via metal carbonyl complexes in the Monsanto process and Cativa process. Most synthetic aldehydes are produced via hydroformylation. The bulk of the synthetic alcohols, at least those larger than ethanol, are produced by hydrogenation of hydroformylation-derived aldehydes. Similarly, the Wacker process is used in the oxidation of ethylene to acetaldehyde.
Almost all industrial processes involving alkene-derived polymers rely on organometallic catalysts. The world's polyethylene and polypropylene are produced both heterogeneously, via Ziegler–Natta catalysis, and homogeneously, e.g., via constrained geometry catalysts.
Most processes involving hydrogen rely on metal-based catalysts. Whereas bulk hydrogenations (e.g., margarine production) rely on heterogeneous catalysts, for the production of fine chemicals such hydrogenations rely on soluble (homogeneous) organometallic complexes or involve organometallic intermediates. Organometallic complexes allow these hydrogenations to be effected asymmetrically.
Many semiconductors are produced from trimethylgallium, trimethylindium, trimethylaluminium, and trimethylantimony. These volatile compounds are decomposed along with ammonia, arsine, phosphine and related hydrides on a heated substrate via metalorganic vapor phase epitaxy (MOVPE) process in the production of light-emitting diodes (LEDs).
Organometallic reactions
Organometallic compounds undergo several important reactions:
associative and dissociative substitution
oxidative addition and reductive elimination
transmetalation
migratory insertion
β-hydride elimination
electron transfer
carbon-hydrogen bond activation
carbometalation
hydrometalation
cyclometalation
nucleophilic abstraction
The synthesis of many organic molecules is facilitated by organometallic complexes. Sigma-bond metathesis is a synthetic method for forming new carbon-carbon sigma bonds. Sigma-bond metathesis is typically used with early transition-metal complexes that are in their highest oxidation state. Using transition metals that are in their highest oxidation state prevents other reactions from occurring, such as oxidative addition. In addition to sigma-bond metathesis, olefin metathesis is used to synthesize various carbon-carbon pi bonds. Neither sigma-bond metathesis nor olefin metathesis changes the oxidation state of the metal. Many other methods are used to form new carbon-carbon bonds, including beta-hydride elimination and insertion reactions.
Catalysis
Organometallic complexes are commonly used in catalysis. Major industrial processes include hydrogenation, hydrosilylation, hydrocyanation, olefin metathesis, alkene polymerization, alkene oligomerization, hydrocarboxylation, methanol carbonylation, and hydroformylation. Organometallic intermediates are also invoked in many heterogeneous catalysis processes, analogous to those listed above. Additionally, organometallic intermediates are assumed for the Fischer–Tropsch process.
Organometallic complexes are commonly used in small-scale fine chemical synthesis as well, especially in cross-coupling reactions that form carbon-carbon bonds, e.g. Suzuki-Miyaura coupling, Buchwald-Hartwig amination for producing aryl amines from aryl halides, and Sonogashira coupling, etc.
Environmental concerns
Natural and contaminant organometallic compounds are found in the environment. Some that are remnants of human use, such as organolead and organomercury compounds, are toxicity hazards. Tetraethyllead was prepared for use as a gasoline additive but has fallen into disuse because of lead's toxicity. Its replacements are other organometallic compounds, such as ferrocene and methylcyclopentadienyl manganese tricarbonyl (MMT). The organoarsenic compound roxarsone is a controversial animal feed additive. In 2006, approximately one million kilograms of it were produced in the U.S alone. Organotin compounds were once widely used in anti-fouling paints but have since been banned due to environmental concerns.
See also
Bioorganometallic chemistry
Metal carbon dioxide complex
References
Sources
External links
MIT OpenCourseWare: Organometallic Chemistry
Rob Toreki's Organometallic HyperTextbook
web listing of US chemists who specialize in organometallic chemistry . | Organometallic chemistry | [
"Chemistry"
] | 3,728 | [
"Organometallic chemistry"
] |
22,657 | https://en.wikipedia.org/wiki/Order%20of%20magnitude | Order of magnitude is a concept used to discuss the scale of numbers in relation to one another.
Two numbers are "within an order of magnitude" of each other if their ratio is between 1/10 and 10. In other words, the two numbers are within about a factor of 10 of each other.
For example, 1 and 1.02 are within an order of magnitude. So are 1 and 2, 1 and 9, or 1 and 0.2. However, 1 and 15 are not within an order of magnitude, since their ratio is 15/1 = 15 > 10. The reciprocal ratio, 1/15, is less than 0.1, so the same result is obtained.
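Expressed in code, the check reduces to asking whether the base-10 logarithms of the two numbers differ by less than 1; a minimal Python sketch of that test:

```python
import math

# "Within an order of magnitude": the ratio lies between 1/10 and 10,
# i.e. the base-10 logarithms differ by less than 1.

def within_order_of_magnitude(x: float, y: float) -> bool:
    return abs(math.log10(x / y)) < 1

print(within_order_of_magnitude(1, 9))     # True
print(within_order_of_magnitude(1, 0.2))   # True
print(within_order_of_magnitude(1, 15))    # False (ratio 15 > 10)
```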
Differences in order of magnitude can be measured on a base-10 logarithmic scale in "decades" (i.e., factors of ten). For example, there is one order of magnitude between 2 and 20, and two orders of magnitude between 2 and 200. Each division or multiplication by 10 is called an order of magnitude.
This phrasing helps quickly express the difference in scale between 2 and 2,000,000: they differ by 6 orders of magnitude.
Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
Below are examples of different methods of partitioning the real numbers into specific "orders of magnitude" for various purposes. There is not one single accepted way of doing this, and different partitions may be easier to compute but less useful for approximation, or better for approximation but more difficult to compute.
Calculating the order of magnitude
Generally, the order of magnitude of a number is the smallest power of 10 used to represent that number. To work out the order of magnitude of a number N, the number is first expressed in the following form:
N = a × 10^b
where 1/√10 ≤ a < √10, or approximately 0.316 ≤ a < 3.162. Then, b represents the order of magnitude of the number. The order of magnitude can be any integer. The table below enumerates the order of magnitude of some numbers using this definition:
The geometric mean of 10^(b−1/2) and 10^(b+1/2) is 10^b, meaning that a value of exactly 10^b (i.e., a = 1) represents a geometric halfway point within the range of possible values of a.
Some use a simpler definition where 0.5 ≤ a < 5. This definition has the effect of lowering the values of b slightly:
Uses
Orders of magnitude are used to make approximate comparisons. If numbers differ by one order of magnitude, x is about ten times different in quantity than y. If values differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale: the larger value is less than ten times the smaller value.
The growing amounts of Internet data have led to addition of new SI prefixes over time, most recently in 2022.
Calculating the order of magnitude by truncation
The order of magnitude of a number is, intuitively speaking, the number of powers of 10 contained in the number. More precisely, the order of magnitude of a number can be defined in terms of the common logarithm, usually as the integer part of the logarithm, obtained by truncation. For example, the number 4,000,000 has a logarithm (in base 10) of 6.602; its order of magnitude is 6. When truncating, a number of this order of magnitude is between 10^6 and 10^7. In a similar example, with the phrase "seven-figure income", the order of magnitude is the number of figures minus one, so it is very easily determined without a calculator to be 6. An order of magnitude is an approximate position on a logarithmic scale.
Order-of-magnitude estimate
An order-of-magnitude estimate of a variable, whose precise value is unknown, is an estimate rounded to the nearest power of ten. For example, an order-of-magnitude estimate for a variable between about 3 billion and 30 billion (such as the human population of the Earth) is 10 billion. To round a number to its nearest order of magnitude, one rounds its logarithm to the nearest integer. Thus 4,000,000, which has a logarithm (in base 10) of 6.602, has 7 as its nearest order of magnitude, because "nearest" implies rounding rather than truncation. For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten (about 3.162). For example, the nearest order of magnitude for 2 × 10^8 is 8, whereas the nearest order of magnitude for 5 × 10^8 is 9. An order-of-magnitude estimate is sometimes also called a zeroth order approximation.
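Both conventions, the truncated order of magnitude and the nearest-power-of-ten estimate, are easy to compute from the common logarithm. The sketch below assumes positive inputs of at least 1; the sample values simply illustrate the rounding rule described above.

```python
import math

# Order of magnitude by truncation (integer part of log10, implemented here with
# floor, so the examples assume x >= 1) versus the nearest-power-of-ten estimate
# (round log10 to the nearest integer).

def order_of_magnitude_truncated(x: float) -> int:
    return math.floor(math.log10(x))

def nearest_order_of_magnitude(x: float) -> int:
    return round(math.log10(x))

print(order_of_magnitude_truncated(4_000_000))   # 6   (log10 ≈ 6.602)
print(nearest_order_of_magnitude(4_000_000))     # 7   (6.602 rounds up)
print(nearest_order_of_magnitude(2e8))           # 8   (multiplier 2 < sqrt(10))
print(nearest_order_of_magnitude(5e8))           # 9   (multiplier 5 > sqrt(10))
```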
Non-decimal orders of magnitude
An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually 10, interpreted as the base of the logarithm and the representative of values of magnitude one. Logarithmic distributions are common in nature and considering the order of magnitude of values sampled from such a distribution can be more intuitive. When the reference value is 10, the order of magnitude can be understood as the number of digits minus one in the base-10 representation of the value. Similarly, if the reference value is one of some powers of 2 since computers store data in a binary format, the magnitude can be understood in terms of the amount of computer memory needed to store that value.
Irrational orders of magnitude
Other orders of magnitude may be calculated using bases other than integers. In the field of astronomy, the nighttime brightnesses of celestial bodies are ranked by "magnitudes", in which each successive level differs in brightness by a factor of the fifth root of 100, approximately 2.512. Thus, a level being 5 magnitudes brighter than another indicates that it is a factor of exactly 100 times brighter: that is, two base 10 orders of magnitude.
This series of magnitudes forms a logarithmic scale with a base of 100^(1/5) ≈ 2.512.
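The brightness ratio implied by a magnitude difference follows directly from that base; a short Python sketch:

```python
# Brightness ratio corresponding to a difference in astronomical magnitudes:
# each magnitude step is a factor of 100**(1/5) ≈ 2.512, so 5 steps is exactly 100.

def brightness_ratio(delta_magnitudes: float) -> float:
    return 100 ** (delta_magnitudes / 5)

print(brightness_ratio(1))   # ≈ 2.512
print(brightness_ratio(5))   # 100.0, i.e. two base-10 orders of magnitude
```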
Base 1,000,000 orders of magnitude
The different decimal numeral systems of the world use a larger base to better envision the size of the number, and have created names for the powers of this larger base. The table shows what number the order of magnitude aims at for base 10 and for base 1,000,000. It can be seen that the order of magnitude is included in the number name in this example, because bi- means 2, tri- means 3, etc. (these make sense in the long scale only), and the suffix -illion tells that the base is 1,000,000. But the number names billion, trillion themselves (here with other meaning than in the first chapter) are not names of the orders of magnitude, they are names of "magnitudes", that is the numbers 10^12, 10^18, etc.
SI units in the table at right are used together with SI prefixes, which were devised with mainly base 1000 magnitudes in mind. The IEC standard prefixes with base 1024 were invented for use in electronic technology.
See also
Big O notation
Decibel
Mathematical operators and symbols in Unicode
Names of large numbers
Names of small numbers
Number sense
Orders of magnitude (acceleration)
Orders of magnitude (area)
Orders of magnitude (bit rate)
Orders of magnitude (current)
Orders of magnitude (data)
Orders of magnitude (energy)
Orders of magnitude (force)
Orders of magnitude (frequency)
Orders of magnitude (illuminance)
Orders of magnitude (length)
Orders of magnitude (mass)
Orders of magnitude (numbers)
Orders of magnitude (power)
Orders of magnitude (pressure)
Orders of magnitude (radiation)
Orders of magnitude (speed)
Orders of magnitude (temperature)
Orders of magnitude (time)
Orders of magnitude (voltage)
Orders of magnitude (volume)
Powers of Ten
Scientific notation
Unicode symbols for CJK Compatibility includes SI Unit symbols
Valuation (algebra), an algebraic generalization of "order of magnitude"
Scale (analytical tool)
References
Further reading
Asimov, Isaac, The Measure of the Universe (1983).
External links
The Scale of the Universe 2 Interactive tool from Planck length 10^−35 meters to universe size 10^27 meters
Cosmos – an Illustrated Dimensional Journey from microcosmos to macrocosmos – from Digital Nature Agency
Powers of 10, a graphic animated illustration that starts with a view of the Milky Way at 10^23 meters and ends with subatomic particles at 10^−16 meters.
What is Order of Magnitude?
Elementary mathematics
Logarithmic scales of measurement | Order of magnitude | [
"Physics",
"Mathematics"
] | 1,700 | [
"Physical quantities",
"Quantity",
"Elementary mathematics",
"Logarithmic scales of measurement",
"Orders of magnitude",
"Units of measurement"
] |
22,718 | https://en.wikipedia.org/wiki/Ozone | Ozone (or trioxygen) is an inorganic molecule with the chemical formula O3. It is a pale blue gas with a distinctively pungent smell. It is an allotrope of oxygen that is much less stable than the diatomic allotrope O2, breaking down in the lower atmosphere to O2 (dioxygen). Ozone is formed from dioxygen by the action of ultraviolet (UV) light and electrical discharges within the Earth's atmosphere. It is present in very low concentrations throughout the atmosphere, with its highest concentration high in the ozone layer of the stratosphere, which absorbs most of the Sun's ultraviolet (UV) radiation.
Ozone's odor is reminiscent of chlorine, and detectable by many people at concentrations of as little as 0.1 ppm in air. Ozone's O3 structure was determined in 1865. The molecule was later proven to have a bent structure and to be weakly diamagnetic. In standard conditions, ozone is a pale blue gas that condenses at cryogenic temperatures to a dark blue liquid and finally a violet-black solid. Ozone's instability with regard to more common dioxygen is such that both concentrated gas and liquid ozone may decompose explosively at elevated temperatures, physical shock, or fast warming to the boiling point. It is therefore used commercially only in low concentrations.
Ozone is a powerful oxidant (far more so than dioxygen) and has many industrial and consumer applications related to oxidation. This same high oxidizing potential, however, causes ozone to damage mucous and respiratory tissues in animals, and also tissues in plants, above concentrations of about . While this makes ozone a potent respiratory hazard and pollutant near ground level, a higher concentration in the ozone layer (from two to eight ppm) is beneficial, preventing damaging UV light from reaching the Earth's surface.
Nomenclature
The trivial name ozone is the most commonly used and preferred IUPAC name. The systematic names 2λ4-trioxidiene and catena-trioxygen, valid IUPAC names, are constructed according to the substitutive and additive nomenclatures, respectively. The name ozone derives from ozein (ὄζειν), the Greek neuter present participle for smell, referring to ozone's distinctive smell.
In appropriate contexts, ozone can be viewed as trioxidane with two hydrogen atoms removed, and as such, trioxidanylidene may be used as a systematic name, according to substitutive nomenclature. By default, these names pay no regard to the radicality of the ozone molecule. In an even more specific context, this can also name the non-radical singlet ground state, whereas the diradical state is named trioxidanediyl.
Trioxidanediyl (or ozonide) is used, non-systematically, to refer to the substituent group (-OOO-). Care should be taken to avoid confusing the name of the group for the context-specific name for the ozone given above.
History
In 1785, Dutch chemist Martinus van Marum was conducting experiments involving electrical sparking above water when he noticed an unusual smell, which he attributed to the electrical reactions, failing to realize that he had in fact created ozone.
A half century later, Christian Friedrich Schönbein noticed the same pungent odour and recognized it as the smell often following a bolt of lightning. In 1839, he succeeded in isolating the gaseous chemical and named it "ozone", from the Greek word () meaning "to smell".
For this reason, Schönbein is generally credited with the discovery of ozone. He also noted the similarity of ozone smell to the smell of phosphorus, and in 1844 proved that the product of reaction of white phosphorus with air is identical. A subsequent effort to call ozone "electrified oxygen" he ridiculed by proposing to call the ozone from white phosphorus "phosphorized oxygen". The formula for ozone, O3, was not determined until 1865 by Jacques-Louis Soret and confirmed by Schönbein in 1867.
For much of the second half of the 19th century and well into the 20th, ozone was considered a healthy component of the environment by naturalists and health-seekers. Beaumont, California, had as its official slogan "Beaumont: Zone of Ozone", as evidenced on postcards and Chamber of Commerce letterhead. Naturalists working outdoors often considered the higher elevations beneficial because of their ozone content which was readily monitored. "There is quite a different atmosphere [at higher elevation] with enough ozone to sustain the necessary energy [to work]", wrote naturalist Henry Henshaw, working in Hawaii. Seaside air was considered to be healthy because of its believed ozone content. The smell giving rise to this belief is in fact that of halogenated seaweed metabolites and dimethyl sulfide.
Much of ozone's appeal seems to have resulted from its "fresh" smell, which evoked associations with purifying properties. Scientists noted its harmful effects. In 1873 James Dewar and John Gray McKendrick documented that frogs grew sluggish, birds gasped for breath, and rabbits' blood showed decreased levels of oxygen after exposure to "ozonized air", which "exercised a destructive action". Schönbein himself reported that chest pains, irritation of the mucous membranes and difficulty breathing occurred as a result of inhaling ozone, and small mammals died. In 1911, Leonard Hill and Martin Flack stated in the Proceedings of the Royal Society B that ozone's healthful effects "have, by mere iteration, become part and parcel of common belief; and yet exact physiological evidence in favour of its good effects has been hitherto almost entirely wanting ... The only thoroughly well-ascertained knowledge concerning the physiological effect of ozone, so far attained, is that it causes irritation and œdema of the lungs, and death if inhaled in relatively strong concentration for any time."
During World War I, ozone was tested at Queen Alexandra Military Hospital in London as a possible disinfectant for wounds. The gas was applied directly to wounds for as long as 15 minutes. This resulted in damage to both bacterial cells and human tissue. Other sanitizing techniques, such as irrigation with antiseptics, were found preferable.
Until the 1920s, it was not certain whether small amounts of oxozone, , were also present in ozone samples due to the difficulty of applying analytical chemistry techniques to the explosive concentrated chemical. In 1923, Georg-Maria Schwab (working for his doctoral thesis under Ernst Hermann Riesenfeld) was the first to successfully solidify ozone and perform accurate analysis which conclusively refuted the oxozone hypothesis. Further hitherto unmeasured physical properties of pure concentrated ozone were determined by the Riesenfeld group in the 1920s.
Physical properties
Ozone is a colourless or pale blue gas, slightly soluble in water and much more soluble in inert non-polar solvents such as carbon tetrachloride or fluorocarbons, in which it forms a blue solution. At −112 °C, it condenses to form a dark blue liquid. It is dangerous to allow this liquid to warm to its boiling point, because both concentrated gaseous ozone and liquid ozone can detonate. At temperatures below −193 °C, it forms a violet-black solid.
Ozone has a very specific sharp odour somewhat resembling chlorine bleach. Most people can detect it at the 0.01 μmol/mol level in air. Exposure of 0.1 to 1 μmol/mol produces headaches, burning eyes and irritation of the respiratory passages.
Even low concentrations of ozone in air are very destructive to organic materials such as latex, plastics and animal lung tissue.
The ozone molecule is diamagnetic.
Structure
According to experimental evidence from microwave spectroscopy, ozone is a bent molecule, with C2v symmetry (similar to the water molecule). The O–O distances are 127.2 pm. The O–O–O angle is 116.78°. The central atom is sp² hybridized with one lone pair. Ozone is a polar molecule with a dipole moment of 0.53 D. The molecule can be represented as a resonance hybrid with two contributing structures, each with a single bond on one side and double bond on the other. The arrangement possesses an overall bond order of 1.5 for both sides. It is isoelectronic with the nitrite anion. Naturally occurring ozone can be composed of substituted isotopes (16O, 17O, 18O). A cyclic form has been predicted but not observed.
Reactions
Ozone is among the most powerful oxidizing agents known, far stronger than . It is also unstable at high concentrations, decaying into ordinary diatomic oxygen. Its half-life varies with atmospheric conditions such as temperature, humidity, and air movement. Under laboratory conditions, the half-life will average ~1500 minutes (25 hours) in still air at room temperature (24 °C), zero humidity with zero air changes per hour.
2 O3 -> 3 O2
This reaction proceeds more rapidly with increasing temperature. Deflagration of ozone can be triggered by a spark and can occur in ozone concentrations of 10 wt% or higher.
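Treating the ~1500-minute laboratory half-life quoted above as a simple exponential decay (a simplification; the detailed mixed-order kinetics are discussed below), the remaining ozone fraction after a given time can be estimated as follows.

```python
import math

# Rough exponential-decay estimate based on the ~1500 min laboratory half-life
# quoted above.  This is a simplification of the real decomposition kinetics.

HALF_LIFE_MIN = 1500.0
k = math.log(2) / HALF_LIFE_MIN              # effective decay constant per minute

def fraction_remaining(minutes: float) -> float:
    return math.exp(-k * minutes)

print(f"{fraction_remaining(60):.3f}")       # after 1 hour: ~0.973
print(f"{fraction_remaining(24 * 60):.3f}")  # after 1 day:  ~0.514
```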
Ozone can also be produced from oxygen at the anode of an electrochemical cell. This reaction can create smaller quantities of ozone for research purposes.
This can be observed as an unwanted reaction in a Hoffman gas apparatus during the electrolysis of water when the voltage is set above the necessary voltage.
With metals
Ozone will oxidize most metals (except gold, platinum, and iridium) to oxides of the metals in their highest oxidation state. For example:
With nitrogen and carbon compounds
Ozone also oxidizes nitric oxide to nitrogen dioxide:
NO + O3 -> NO2 + O2
This reaction is accompanied by chemiluminescence. The NO2 can be further oxidized to the nitrate radical:
NO2 + O3 -> NO3 + O2
The NO3 formed can react with NO2 to form dinitrogen pentoxide (N2O5).
Solid nitronium perchlorate can be made from NO2, ClO2, and O3 gases:
NO2 + ClO2 + 2 O3 -> NO2ClO4 + 2 O2
Ozone does not react with ammonium salts, but it oxidizes ammonia to ammonium nitrate:
2 NH3 + 4 O3 -> NH4NO3 + 4 O2 + H2O
Ozone reacts with carbon to form carbon dioxide, even at room temperature:
C + 2 O3 -> CO2 + 2 O2
With sulfur compounds
Ozone oxidizes sulfides to sulfates. For example, lead(II) sulfide is oxidized to lead(II) sulfate:
PbS + 4 O3 -> PbSO4 + 4 O2
Sulfuric acid can be produced from ozone, water and either elemental sulfur or sulfur dioxide:
In the gas phase, ozone reacts with hydrogen sulfide to form sulfur dioxide:
H2S + O3 -> SO2 + H2O
In an aqueous solution, however, two competing simultaneous reactions occur, one to produce elemental sulfur, and one to produce sulfuric acid:
With alkenes and alkynes
Alkenes can be oxidatively cleaved by ozone, in a process called ozonolysis, giving alcohols, aldehydes, ketones, and carboxylic acids, depending on the second step of the workup.
Ozone can also cleave alkynes to form an acid anhydride or diketone product. If the reaction is performed in the presence of water, the anhydride hydrolyzes to give two carboxylic acids.
Usually ozonolysis is carried out in a solution of dichloromethane, at a temperature of −78 °C. After a sequence of cleavage and rearrangement, an organic ozonide is formed. With reductive workup (e.g. zinc in acetic acid or dimethyl sulfide), ketones and aldehydes will be formed, with oxidative workup (e.g. aqueous or alcoholic hydrogen peroxide), carboxylic acids will be formed.
Other substrates
All three atoms of ozone may also react, as in the reaction of tin(II) chloride with hydrochloric acid and ozone:
3 SnCl2 + 6 HCl + O3 -> 3 SnCl4 + 3 H2O
Iodine perchlorate can be made by treating iodine dissolved in cold anhydrous perchloric acid with ozone:
I2 + 6 HClO4 + O3 -> 2 I(ClO4)3 + 3 H2O
Ozone could also react with potassium iodide to give oxygen and iodine gas that can be titrated for quantitative determination:
2KI + O3 + H2O -> 2KOH + O2 + I2
Combustion
Ozone can be used for combustion reactions and combustible gases; ozone provides higher temperatures than burning in dioxygen (). The following is a reaction for the combustion of carbon subnitride which can also cause higher temperatures:
3 C4N2 + 4 O3 -> 12 CO + 3 N2
Ozone can react at cryogenic temperatures. At 77 K (−196 °C), atomic hydrogen reacts with liquid ozone to form a hydrogen superoxide radical, which dimerizes:
Ozone decomposition
Types of ozone decomposition
Ozone is a toxic substance, commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers...) and its catalytic decomposition is very important to reduce pollution. This type of decomposition is the most widely used, especially with solid catalysts, and it has many advantages such as a higher conversion with a lower temperature. Furthermore, the product and the catalyst can be instantaneously separated, and this way the catalyst can be easily recovered without using any separation operation. Moreover, the most used materials in the catalytic decomposition of ozone in the gas phase are noble metals like Pt, Rh or Pd and transition metals such as Mn, Co, Cu, Fe, Ni or Ag.
There are two other possibilities for the ozone decomposition in gas phase:
The first one is a thermal decomposition where the ozone can be decomposed using only the action of heat. The problem is that this type of decomposition is very slow with temperatures below 250 °C. However, the decomposition rate can be increased working with higher temperatures but this would involve a high energy cost.
The second one is a photochemical decomposition, which consists of radiating ozone with ultraviolet radiation (UV) and it gives rise to oxygen and radical peroxide.
Kinetics of ozone decomposition into molecular oxygen
The process of ozone decomposition is a complex reaction involving two elementary reactions that finally lead to molecular oxygen, and this means that the reaction order and the rate law cannot be determined by the stoichiometry of the fitted equation.
Overall reaction: 2 O3 -> 3 O2
Rate law (observed): v = kobs·[O3]^2·[O2]^(−1)
It has been determined that the ozone decomposition follows first-order kinetics overall; from the rate law above, the partial order with respect to molecular oxygen is −1 and with respect to ozone is 2, so the global reaction order is 1.
The ozone decomposition consists of two elementary steps: The first one corresponds to a unimolecular reaction because one only molecule of ozone decomposes into two products (molecular oxygen and oxygen). Then, the oxygen from the first step is an intermediate because it participates as a reactant in the second step, which is a bimolecular reaction because there are two different reactants (ozone and oxygen) that give rise to one product, that corresponds to molecular oxygen in the gas phase.
Step 1: Unimolecular reaction O3 -> O2 + O
Step 2: Bimolecular reaction O3 + O -> 2 O2
These two steps have different reaction rates: the first one is reversible and faster than the second, slower reaction, so the second reaction is the rate-determining step and is used to determine the observed reaction rate. The rate laws for the two steps are:
r1 = k1·[O3] (forward) and r−1 = k−1·[O2][O] (reverse) for step 1, and r2 = k2·[O3][O] for step 2.
This mechanism explains the experimentally observed rate law of ozone decomposition and also gives the reaction orders with respect to ozone and oxygen, from which the overall reaction order follows. The slower step, the bimolecular reaction, determines the rate of product formation, and since this step gives rise to two oxygen molecules the rate law takes the form:
d[O2]/dt = 2·k2·[O3][O]
However, this equation depends on the concentration of atomic oxygen (the intermediate), which can be determined by considering the first step. Since the first step is fast and reversible and the second step is slow, the reactants and products of the first step are in equilibrium, so the concentration of the intermediate is:
K = k1/k−1 = [O2][O]/[O3], hence [O] = K·[O3]/[O2]
Then, using these equations, the formation rate of molecular oxygen is:
d[O2]/dt = 2·k2·K·[O3]^2/[O2]
Finally, the mechanism reproduces the rate law observed experimentally, with a rate constant kobs and overall first-order kinetics:
v = kobs·[O3]^2·[O2]^(−1)
where kobs = 2·k2·K = 2·k1·k2/k−1.
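This pre-equilibrium result can be checked numerically by integrating the full two-step mechanism. The sketch below uses arbitrary illustrative rate constants chosen so that step 1 is fast and reversible while step 2 is slow (they are not measured ozone data) and requires SciPy.

```python
from scipy.integrate import solve_ivp

# Numerical check of the pre-equilibrium rate law derived above.  Rate constants
# are arbitrary illustrative values (step 1 fast and reversible, step 2 slow).
k1, k1r, k2 = 1.0e3, 1.0e6, 1.0       # O3 -> O2 + O ; O2 + O -> O3 ; O3 + O -> 2 O2

def rhs(t, y):
    o3, o2, o = y
    r1, r1r, r2 = k1 * o3, k1r * o2 * o, k2 * o3 * o
    return [-r1 + r1r - r2,            # d[O3]/dt
            r1 - r1r + 2.0 * r2,       # d[O2]/dt
            r1 - r1r - r2]             # d[O]/dt

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.1, 0.0], method="LSODA", rtol=1e-8, atol=1e-12)
o3, o2, o = sol.y[:, -1]

K = k1 / k1r
print(o, K * o3 / o2)                               # intermediate follows K[O3]/[O2]
print(k1 * o3 - k1r * o2 * o + k2 * o3 * o,         # ozone consumption -d[O3]/dt ...
      (2.0 * k1 * k2 / k1r) * o3**2 / o2)           # ... tracks kobs [O3]^2 / [O2]
```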
Reduction to ozonides
Reduction of ozone gives the ozonide anion, O3−. Derivatives of this anion are explosive and must be stored at cryogenic temperatures. Ozonides for all the alkali metals are known. KO3, RbO3, and CsO3 can be prepared from their respective superoxides:
KO2 + O3 -> KO3 + O2
Although KO3 can be formed as above, it can also be formed from potassium hydroxide and ozone:
2 KOH + 5 O3 -> 2 KO3 + 5 O2 + H2O
NaO3 and LiO3 must be prepared by the action of CsO3 in liquid NH3 on an ion-exchange resin containing Na+ or Li+ ions:
CsO3 + Na+ -> Cs+ + NaO3
A solution of calcium in ammonia reacts with ozone to give ammonium ozonide and not calcium ozonide:
Applications
Ozone can be used to remove iron and manganese from water, forming a precipitate which can be filtered:
Ozone will also oxidize dissolved hydrogen sulfide in water to sulfurous acid:
3 O3 + H2S -> H2SO3 + 3 O2
These three reactions are central in the use of ozone-based well water treatment.
Ozone will also detoxify cyanides by converting them to cyanates.
CN- + O3 -> CNO- + O2
Ozone will also completely decompose urea:
(NH2)2CO + O3 -> N2 + CO2 + 2 H2O
Spectroscopic properties
Ozone is a bent triatomic molecule with three vibrational modes: the symmetric stretch (1103.157 cm−1), bend (701.42 cm−1) and antisymmetric stretch (1042.096 cm−1). The symmetric stretch and bend are weak absorbers, but the antisymmetric stretch is strong and responsible for ozone being an important minor greenhouse gas. This IR band is also used to detect ambient and atmospheric ozone although UV-based measurements are more common.
The electromagnetic spectrum of ozone is quite complex. An overview can be seen at the MPI Mainz UV/VIS Spectral Atlas of Gaseous Molecules of Atmospheric Interest.
All of the bands are dissociative, meaning that the molecule falls apart to O + O2 after absorbing a photon. The most important absorption is the Hartley band, extending from slightly above 300 nm down to slightly above 200 nm. It is this band that is responsible for absorbing UV C in the stratosphere.
On the high wavelength side, the Hartley band transitions to the so-called Huggins band, which falls off rapidly until disappearing by ~360 nm. Above 400 nm, extending well out into the NIR, are the Chappius and Wulf bands. There, unstructured absorption bands are useful for detecting high ambient concentrations of ozone, but are so weak that they do not have much practical effect.
There are additional absorption bands in the far UV, which increase slowly from 200 nm down to a maximum at ~120 nm.
Ozone in Earth's atmosphere
The standard way to express total ozone levels (the amount of ozone in a given vertical column) in the atmosphere is by using Dobson units. Point measurements are reported as mole fractions in nmol/mol (parts per billion, ppb) or as concentrations in μg/m3. The study of ozone concentration in the atmosphere started in the 1920s.
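For converting between the units mentioned here, the sketch below assumes ideal-gas behaviour at 25 °C and 1 atm (molar volume 24.45 L/mol) and the usual definition of the Dobson unit as about 2.687 × 10^16 molecules per square centimetre of column; the sample readings are illustrative.

```python
# Conversions between common ozone measurement units.
# Assumes ideal-gas behaviour at 25 degC and 1 atm (molar volume 24.45 L/mol);
# 1 Dobson unit corresponds to about 2.687e16 molecules per cm^2 of column.

M_O3 = 48.00          # g/mol
MOLAR_VOLUME = 24.45  # L/mol at 25 degC and 1 atm
DOBSON = 2.687e16     # molecules per cm^2

def ppb_to_ug_per_m3(ppb: float) -> float:
    return ppb * M_O3 / MOLAR_VOLUME

def column_to_dobson(molecules_per_cm2: float) -> float:
    return molecules_per_cm2 / DOBSON

print(round(ppb_to_ug_per_m3(60), 1))      # a 60 ppb reading is ~117.8 ug/m^3
print(round(column_to_dobson(8.1e18), 1))  # ~301.5 DU, a typical total-ozone column
```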
Ozone layer
Location and production
The highest levels of ozone in the atmosphere are in the stratosphere, in a region also known as the ozone layer between about 10 and 50 km above the surface (or between about 6 and 31 miles). However, even in this "layer", the ozone concentrations are only two to eight parts per million, so most of the oxygen there is dioxygen, O2, at about 210,000 parts per million by volume.
Ozone in the stratosphere is mostly produced from short-wave ultraviolet rays between 240 and 160 nm. Oxygen starts to absorb weakly at 240 nm in the Herzberg bands, but most of the oxygen is dissociated by absorption in the strong Schumann–Runge bands between 200 and 160 nm where ozone does not absorb. While shorter wavelength light, extending to even the X-Ray limit, is energetic enough to dissociate molecular oxygen, there is relatively little of it, and, the strong solar emission at Lyman-alpha, 121 nm, falls at a point where molecular oxygen absorption is a minimum.
The process of ozone creation and destruction is called the Chapman cycle and starts with the photolysis of molecular oxygen
O2 + photon (radiation λ < 240 nm) -> 2 O
followed by reaction of the oxygen atom with another molecule of oxygen to form ozone.
O + O2 + M -> O3 + M
where "M" denotes the third body that carries off the excess energy of the reaction. The ozone molecule can then absorb a UV-C photon and dissociate:
O3 + photon -> O2 + O
The excess kinetic energy heats the stratosphere when the O atoms and the molecular oxygen fly apart and collide with other molecules. This conversion of UV light into kinetic energy warms the stratosphere. The oxygen atoms produced in the photolysis of ozone then react back with other oxygen molecule as in the previous step to form more ozone. In the clear atmosphere, with only nitrogen and oxygen, ozone can react with the atomic oxygen to form two molecules of :
O3 + O -> 2 O2
An estimate of the rate of this termination step relative to the cycling of atomic oxygen back to ozone can be found simply by taking the ratio of the concentration of O2 to that of O3. The termination reaction is catalysed by the presence of certain free radicals, of which the most important are hydroxyl (OH), nitric oxide (NO), atomic chlorine (Cl) and bromine (Br). In the second half of the 20th century, the amount of ozone in the stratosphere was discovered to be declining, mostly because of increasing concentrations of chlorofluorocarbons (CFCs) and similar chlorinated and brominated organic molecules. Concern over the health effects of the decline led to the 1987 Montreal Protocol, which banned the production of many ozone-depleting chemicals, and in the first and second decades of the 21st century to the beginning of a recovery of stratospheric ozone concentrations.
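As a rough illustration of that ratio, the following minimal Python sketch uses only the mixing ratios quoted earlier in this section and deliberately ignores the different rate constants of the two reactions.
# Back-of-envelope ratio of O2 to O3 in the ozone layer, using the
# mixing ratios quoted above (O3 ~ 2-8 ppm, O2 ~ 210,000 ppm).
o2_ppm = 210_000
for o3_ppm in (2, 8):
    print(f"O3 = {o3_ppm} ppm -> [O2]/[O3] ~ {o2_ppm / o3_ppm:,.0f}")
# O3 = 2 ppm -> [O2]/[O3] ~ 105,000
# O3 = 8 ppm -> [O2]/[O3] ~ 26,250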
Importance to surface-dwelling life on Earth
Ozone in the ozone layer filters out sunlight wavelengths from about 200 nm UV rays to 315 nm, with ozone peak absorption at about 250 nm. This ozone UV absorption is important to life, since it extends the absorption of UV by ordinary oxygen and nitrogen in air (which absorb all wavelengths < 200 nm) through the lower UV-C (200–280 nm) and the entire UV-B band (280–315 nm). The small unabsorbed part that remains of UV-B after passage through ozone causes sunburn in humans, and direct DNA damage in living tissues in both plants and animals. Ozone's effect on mid-range UV-B rays is illustrated by its effect on UV-B at 290 nm, which is 350 million times more intense at the top of the atmosphere than at the surface. Nevertheless, enough UV-B radiation at similar frequencies reaches the ground to cause some sunburn, and these same wavelengths are also among those responsible for the production of vitamin D in humans.
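As a hedged sketch of the arithmetic behind that attenuation figure, the Beer–Lambert law I = I0·exp(−σN) can be inverted to ask what absorption cross-section σ would reproduce the quoted 350-million-fold factor; the vertical 300 DU column assumed below is an illustrative value, not one stated in this article.
import math
# Beer-Lambert sketch: what ozone absorption cross-section at 290 nm would
# reproduce the quoted 350-million-fold attenuation, assuming a vertical
# column of 300 DU (300 * 2.687e16 molecules/cm^2)?
attenuation = 350e6                      # I_top / I_surface from the text
column = 300 * 2.687e16                  # molecules per cm^2 (assumed column)
sigma = math.log(attenuation) / column   # cm^2 per molecule
print(f"implied cross-section ~ {sigma:.2e} cm^2/molecule")
# ~2.4e-18 cm^2/molecule, a plausible order of magnitude for this part of the band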
The ozone layer has little effect on the longer UV wavelengths called UV-A (315–400 nm), but this radiation does not cause sunburn or direct DNA damage. While UV-A probably does cause long-term skin damage in certain humans, it is not as dangerous to plants and to the health of surface-dwelling organisms on Earth in general (see ultraviolet for more information on near ultraviolet).
Low level ozone
Low level ozone (or tropospheric ozone) is an atmospheric pollutant. It is not emitted directly by car engines or by industrial operations, but formed by the reaction of sunlight on air containing hydrocarbons and nitrogen oxides that react to form ozone directly at the source of the pollution or many kilometers downwind.
Ozone reacts directly with some hydrocarbons such as aldehydes and thus begins their removal from the air, but the products are themselves key components of smog. Ozone photolysis by UV light leads to production of the hydroxyl radical HO•, which plays a part in the removal of hydrocarbons from the air but is also the first step in the creation of components of smog such as peroxyacyl nitrates, which can be powerful eye irritants. The atmospheric lifetime of tropospheric ozone is about 22 days; its main removal mechanisms are deposition to the ground, the above-mentioned reaction giving HO•, and reactions with OH and the peroxy radical HO2•.
There is evidence of significant reduction in agricultural yields because of increased ground-level ozone and pollution which interferes with photosynthesis and stunts overall growth of some plant species. The United States Environmental Protection Agency (EPA) has proposed a secondary regulation to reduce crop damage, in addition to the primary regulation designed for the protection of human health.
Low level ozone in urban areas
Certain examples of cities with elevated ozone readings are Denver, Colorado; Houston, Texas; and Mexico City, Mexico. Houston has a reading of around 41 nmol/mol, while Mexico City is far more hazardous, with a reading of about 125 nmol/mol.
Low level ozone, or tropospheric ozone, is the most concerning type of ozone pollution in urban areas and is increasing in general. Ozone pollution in urban areas affects denser populations, and is worsened by high populations of vehicles, which emit pollutants NO2 and VOCs, the main contributors to problematic ozone levels. Ozone pollution in urban areas is especially concerning with increasing temperatures, raising heat-related mortality during heat waves. During heat waves in urban areas, ground level ozone pollution can be 20% higher than usual. Ozone pollution in urban areas reaches higher levels of exceedance in the summer and autumn, which may be explained by weather patterns and traffic patterns. People experiencing poverty are more affected by pollution in general, even though these populations are less likely to be contributing to pollution levels.
As mentioned above, Denver, Colorado, is one of the many cities in the U.S. with high amounts of ozone. According to the American Lung Association, the Denver–Aurora area is the 14th most ozone-polluted area in the U.S. The problem of high ozone levels is not new to this area. In 2004, the EPA designated the Denver Metro/North Front Range as a non-attainment area under the 1997 8-hour ozone standard, but later deferred this status until 2007. The non-attainment designation indicates that an area does not meet the EPA's air quality standards. The Colorado Ozone Action Plan was created in response, and numerous changes were implemented from this plan. The first major change was that car emission testing was expanded across the state to counties that did not previously mandate emissions testing, such as areas of Larimer and Weld counties. There have also been changes made to decrease nitrogen oxide (NOx) and volatile organic compound (VOC) emissions, which should help lower ozone levels.
One large contributor to high ozone levels in the area is the oil and natural gas industry situated in the Denver-Julesburg Basin (DJB), which overlaps with a majority of Colorado's metropolitan areas. Ozone is created naturally in the Earth's stratosphere, but is also created in the troposphere by human activities. As briefly mentioned above, NOx and VOCs react with sunlight to create ozone through a process called photochemistry. One-hour elevated ozone events (>75 ppb) "occur during June–August indicating that elevated ozone levels are driven by regional photochemistry". According to an article from the University of Colorado-Boulder, "Oil and natural gas VOC emission have a major role in ozone production and bear the potential to contribute to elevated O3 levels in the Northern Colorado Front Range (NCFR)". Using complex analyses to research wind patterns and emissions from large oil and natural gas operations, the authors concluded that "elevated O3 levels in the NCFR are predominantly correlated with air transport from N– ESE, which are the upwind sectors where the O&NG operations in the Wattenberg Field area of the DJB are located".
Contained in the Colorado Ozone Action Plan, created in 2008, are plans to evaluate "emission controls for large industrial sources of NOx" and "statewide control requirements for new oil and gas condensate tanks and pneumatic valves". In 2011, the Regional Haze Plan was released, which included a more specific plan to help decrease NOx emissions. Such efforts are difficult to implement and take many years to come to pass. There are also other reasons that ozone levels remain high, including a growing population (and therefore more car emissions) and the mountains along the NCFR, which can trap emissions. Daily air quality readings are available from the Colorado Department of Public Health and Environment. As noted earlier, Denver continues to experience high levels of ozone, and it will take many years and a systems-thinking approach to combat high ozone levels in the Front Range of Colorado.
Ozone cracking
Ozone gas attacks any polymer possessing olefinic or double bonds within its chain structure, such as natural rubber, nitrile rubber, and styrene-butadiene rubber. Products made using these polymers are especially susceptible to attack, which causes cracks to grow longer and deeper with time, the rate of crack growth depending on the load carried by the rubber component and the concentration of ozone in the atmosphere. Such materials can be protected by adding antiozonants, such as waxes, which bond to the surface to create a protective film or blend with the material and provide long term protection. Ozone cracking used to be a serious problem in car tires, for example, but it is not an issue with modern tires. On the other hand, many critical products, like gaskets and O-rings, may be attacked by ozone produced within compressed air systems. Fuel lines made of reinforced rubber are also susceptible to attack, especially within the engine compartment, where some ozone is produced by electrical components. Storing rubber products in close proximity to a DC electric motor can accelerate ozone cracking. The commutator of the motor generates sparks which in turn produce ozone.
Ozone as a greenhouse gas
Although ozone was present at ground level before the Industrial Revolution, peak concentrations are now far higher than the pre-industrial levels, and even background concentrations well away from sources of pollution are substantially higher. Ozone acts as a greenhouse gas, absorbing some of the infrared energy emitted by the earth. Quantifying the greenhouse gas potency of ozone is difficult because it is not present in uniform concentrations across the globe. However, the most widely accepted scientific assessments relating to climate change (e.g. the Intergovernmental Panel on Climate Change Third Assessment Report) suggest that the radiative forcing of tropospheric ozone is about 25% that of carbon dioxide.
The annual global warming potential of tropospheric ozone is between 918 and 1022 tons carbon dioxide equivalent/tons tropospheric ozone. This means on a per-molecule basis, ozone in the troposphere has a radiative forcing effect roughly 1,000 times as strong as carbon dioxide. However, tropospheric ozone is a short-lived greenhouse gas, which decays in the atmosphere much more quickly than carbon dioxide. This means that over a 20-year span, the global warming potential of tropospheric ozone is much less, roughly 62 to 69 tons carbon dioxide equivalent / ton tropospheric ozone.
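The per-mass figures above can be converted into a rough per-molecule comparison using the molar masses of ozone (48 g/mol) and carbon dioxide (44 g/mol); the short Python sketch below only restates that arithmetic.
# Convert the per-mass forcing equivalence quoted above into a rough
# per-molecule ratio using molar masses (O3 = 48 g/mol, CO2 = 44 g/mol).
M_O3, M_CO2 = 48.0, 44.0
for per_mass in (918, 1022):
    per_molecule = per_mass * M_O3 / M_CO2
    print(f"{per_mass} t CO2e/t O3 -> ~{per_molecule:,.0f} per molecule")
# 918 -> ~1,001 ; 1022 -> ~1,115, i.e. roughly 1,000x per molecule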
Because of its short-lived nature, tropospheric ozone does not have strong global effects, but has very strong radiative forcing effects on regional scales. In fact, there are regions of the world where tropospheric ozone has a radiative forcing up to 150% of carbon dioxide. For example, ozone increase in the troposphere is shown to be responsible for ~30% of upper Southern Ocean interior warming between 1955 and 2000.
Health effects
Over the last few decades, scientists have studied the effects of acute and chronic ozone exposure on human health. Hundreds of studies suggest that ozone is harmful to people at levels currently found in urban areas. Ozone has been shown to affect the respiratory, cardiovascular and central nervous systems. Early death and problems in reproductive health and development have also been shown to be associated with ozone exposure.
Vulnerable populations
The American Lung Association has identified five populations who are especially vulnerable to the effects of breathing ozone:
Children and teens
People 65 years old and older
People who work or exercise outdoors
People with existing lung diseases, such as asthma and chronic obstructive pulmonary disease (also known as COPD, which includes emphysema and chronic bronchitis)
People with cardiovascular disease
Additional evidence suggests that women, those with obesity and low-income populations may also face higher risk from ozone, although more research is needed.
Acute ozone exposure
Acute ozone exposure ranges from hours to a few days. Because ozone is a gas, it directly affects the lungs and the entire respiratory system. Inhaled ozone causes inflammation and acute—but reversible—changes in lung function, as well as airway hyperresponsiveness. These changes lead to shortness of breath, wheezing, and coughing, which may exacerbate lung diseases such as asthma or chronic obstructive pulmonary disease (COPD), resulting in the need for medical treatment. Acute and chronic exposure to ozone has been shown to cause an increased risk of respiratory infections, due to the following mechanism.
Multiple studies have been conducted to determine the mechanism behind ozone's harmful effects, particularly in the lungs. These studies have shown that exposure to ozone causes changes in the immune response within the lung tissue, resulting in disruption of both the innate and adaptive immune response, as well as altering the protective function of lung epithelial cells. It is thought that these changes in immune response and the related inflammatory response are factors that likely contribute to the increased risk of lung infections, and worsening or triggering of asthma and reactive airways after exposure to ground-level ozone pollution.
The innate (cellular) immune system consists of various chemical signals and cell types that work broadly against multiple pathogen types, typically bacteria or foreign bodies/substances in the host. The cells of the innate system include phagocytes and neutrophils, both thought to contribute to the mechanism of ozone pathology in the lungs, as the functioning of these cell types has been shown to change after exposure to ozone. Macrophages, cells that serve the purpose of eliminating pathogens or foreign material through the process of "phagocytosis", have been shown to change the level of inflammatory signals they release in response to ozone, either up-regulating and resulting in an inflammatory response in the lung, or down-regulating and reducing immune protection. Neutrophils, another important cell type of the innate immune system that primarily targets bacterial pathogens, are found to be present in the airways within 6 hours of exposure to high ozone levels. Despite high levels in the lung tissues, however, their ability to clear bacteria appears impaired by exposure to ozone.
The adaptive immune system is the branch of immunity that provides long-term protection via the development of antibodies targeting specific pathogens and is also impacted by high ozone exposure. Lymphocytes, a cellular component of the adaptive immune response, produce an increased amount of inflammatory chemicals called "cytokines" after exposure to ozone, which may contribute to airway hyperreactivity and worsening asthma symptoms.
The airway epithelial cells also play an important role in protecting individuals from pathogens. In normal tissue, the epithelial layer forms a protective barrier, and also contains specialized ciliary structures that work to clear foreign bodies, mucus and pathogens from the lungs. When exposed to ozone, the cilia become damaged and mucociliary clearance of pathogens is reduced. Furthermore, the epithelial barrier becomes weakened, allowing pathogens to cross the barrier, proliferate and spread into deeper tissues. Together, these changes in the epithelial barrier help make individuals more susceptible to pulmonary infections.
Inhaling ozone not only affects the immune system and lungs, but may also affect the heart. Ozone causes short-term autonomic imbalance, leading to changes in heart rate and reduced heart rate variability; exposure to high levels for as little as one hour results in supraventricular arrhythmia in the elderly. Both of these increase the risk of premature death and stroke. Ozone may also lead to vasoconstriction, resulting in increased systemic arterial pressure and contributing to increased risk of cardiac morbidity and mortality in patients with pre-existing cardiac diseases.
Chronic ozone exposure
Breathing ozone for periods longer than eight hours at a time for weeks, months or years defines chronic exposure. Numerous studies suggest a serious impact on the health of various populations from this exposure.
One study finds significant positive associations between chronic ozone and all-cause, circulatory, and respiratory mortality, with 2%, 3%, and 12% increases in risk per 10 ppb, and reports an association (95% CI) of annual ozone and all-cause mortality with a hazard ratio of 1.02 (1.01–1.04), and with cardiovascular mortality of 1.03 (1.01–1.05). A similar study finds similar associations with all-cause mortality and even larger effects for cardiovascular mortality. An increased risk of mortality from respiratory causes is associated with long-term chronic exposure to ozone.
Chronic ozone has detrimental effects on children, especially those with asthma. The risk for hospitalization in children with asthma increases with chronic exposure to ozone; younger children and those with low-income status are even at greater risk.
Adults suffering from respiratory diseases (asthma, COPD, lung cancer) are at a higher risk of mortality and morbidity and critically ill patients have an increased risk of developing acute respiratory distress syndrome with chronic ozone exposure as well.
Ozone produced by air cleaners
Ozone generators sold as air cleaners intentionally produce the gas ozone. These are often marketed to control indoor air pollution, and use misleading terms to describe ozone. Some examples are describing it as "energized oxygen" or "pure air", suggesting that ozone is a healthy or "better" kind of oxygen. However, according to the EPA, "There is evidence to show that at concentrations that do not exceed public health standards, ozone is not effective at removing many odor-causing chemicals", and "If used at concentrations that do not exceed public health standards, ozone applied to indoor air does not effectively remove viruses, bacteria, mold, or other biological pollutants." Furthermore, another report states that "results of some controlled studies show that concentrations of ozone considerably higher than these [human safety] standards are possible even when a user follows the manufacturer's operating instructions".
The California Air Resources Board has a page listing air cleaners (many with ionizers) meeting their indoor ozone limit of 0.050 parts per million. From that article:
All portable indoor air cleaning devices sold in California must be certified by the California Air Resources Board (CARB). To be certified, air cleaners must be tested for electrical safety and ozone emissions, and meet an ozone emission concentration limit of 0.050 parts per million. For more information about the regulation, visit the air cleaner regulation.
Ozone air pollution
Ozone precursors are a group of pollutants, predominantly those emitted during the combustion of fossil fuels. Ground-level ozone pollution (tropospheric ozone) is created near the Earth's surface by the action of daylight UV rays on these precursors. The ozone at ground level is primarily from fossil fuel precursors, but methane is a natural precursor, and the very low natural background level of ozone at ground level is considered safe. This section examines the health impacts of fossil fuel burning, which raises ground level ozone far above background levels.
There is a great deal of evidence to show that ground-level ozone can harm lung function and irritate the respiratory system. Exposure to ozone (and the pollutants that produce it) is linked to premature death, asthma, bronchitis, heart attack, and other cardiopulmonary problems.
Long-term exposure to ozone has been shown to increase risk of death from respiratory illness. A study of 450,000 people living in U.S. cities saw a significant correlation between ozone levels and respiratory illness over the 18-year follow-up period. The study revealed that people living in cities with high ozone levels, such as Houston or Los Angeles, had an over 30% increased risk of dying from lung disease.
Air quality guidelines such as those from the World Health Organization, the U.S. Environmental Protection Agency (EPA), and the European Union are based on detailed studies designed to identify the levels that can cause measurable ill health effects.
According to scientists with the EPA, susceptible people can be adversely affected by ozone levels as low as 40 nmol/mol. In the EU, the current target value for ozone concentrations is 120 μg/m3, which is about 60 nmol/mol. This target applies to all member states in accordance with Directive 2008/50/EC. Ozone concentration is measured as a maximum daily mean of 8-hour averages, and the target should not be exceeded on more than 25 calendar days per year, starting from January 2010. Whilst the directive requires in the future strict compliance with the 120 μg/m3 limit (i.e. the mean ozone concentration is not to be exceeded on any day of the year), there is no date set for this requirement and it is treated as a long-term objective.
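The quoted equivalence between 120 μg/m3 and about 60 nmol/mol can be checked with a short conversion sketch; the reference conditions of 20 °C and 1013 hPa used below are an assumption (they are common reporting conditions for ambient ozone, but are not stated in this article).
# Convert the EU target of 120 ug/m^3 into a mole fraction, assuming
# reference conditions of 20 degC and 1013 hPa.
M_O3 = 48.0                      # g/mol
V_m = 22.414 * 293.15 / 273.15   # molar volume in L/mol at 20 degC, 1013 hPa
ppb = 120 * V_m / M_O3           # ug/m^3 -> nmol/mol
print(f"120 ug/m^3 ~ {ppb:.0f} nmol/mol")   # ~60 nmol/mol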
In the US, the Clean Air Act directs the EPA to set National Ambient Air Quality Standards for several pollutants, including ground-level ozone, and counties out of compliance with these standards are required to take steps to reduce their levels. In May 2008, under a court order, the EPA lowered its ozone standard from 80 nmol/mol to 75 nmol/mol. The move proved controversial, since the Agency's own scientists and advisory board had recommended lowering the standard to 60 nmol/mol. Many public health and environmental groups also supported the 60 nmol/mol standard, and the World Health Organization recommends 100 μg/m3 (51 nmol/mol).
On January 7, 2010, the U.S. Environmental Protection Agency (EPA) announced proposed revisions to the National Ambient Air Quality Standard (NAAQS) for the pollutant ozone, the principal component of smog:
... EPA proposes that the level of the 8-hour primary standard, which was set at 0.075 μmol/mol in the 2008 final rule, should instead be set at a lower level within the range of 0.060 to 0.070 μmol/mol, to provide increased protection for children and other at risk populations against an array of O3-related adverse health effects that range from decreased lung function and increased respiratory symptoms to serious indicators of respiratory morbidity including emergency department visits and hospital admissions for respiratory causes, and possibly cardiovascular-related morbidity as well as total non-accidental and cardiopulmonary mortality ...
On October 26, 2015, the EPA published a final rule with an effective date of December 28, 2015, that revised the 8-hour primary NAAQS from 0.075 ppm to 0.070 ppm.
The EPA has developed an air quality index (AQI) to help explain air pollution levels to the general public. Under the current standards, eight-hour average ozone mole fractions of 85 to 104 nmol/mol are described as "unhealthy for sensitive groups", 105 nmol/mol to 124 nmol/mol as "unhealthy", and 125 nmol/mol to 404 nmol/mol as "very unhealthy".
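A minimal sketch of that lookup follows, encoding only the three bands quoted above; values outside those ranges are deliberately left unmapped, and the function name is invented for illustration.
def ozone_aqi_category(nmol_mol):
    """Map an 8-hour average ozone mole fraction (nmol/mol) to the
    descriptive bands quoted above; other ranges are not covered here."""
    if 85 <= nmol_mol <= 104:
        return "unhealthy for sensitive groups"
    if 105 <= nmol_mol <= 124:
        return "unhealthy"
    if 125 <= nmol_mol <= 404:
        return "very unhealthy"
    return "outside the bands quoted in this article"
print(ozone_aqi_category(110))   # unhealthy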
Ozone can also be present in indoor air pollution, partly as a result of electronic equipment such as photocopiers. A connection has also been known to exist between the increased pollen, fungal spores, and ozone caused by thunderstorms and hospital admissions of asthma sufferers.
In the Victorian era, one British folk myth held that the smell of the sea was caused by ozone. In fact, the characteristic "smell of the sea" is caused by dimethyl sulfide, a chemical generated by phytoplankton. Victorian Britons considered the resulting smell "bracing".
Heat waves
An investigation to assess the joint mortality effects of ozone and heat during the European heat waves in 2003, concluded that these appear to be additive.
Physiology
Ozone, along with reactive forms of oxygen such as superoxide, singlet oxygen, hydrogen peroxide, and hypochlorite ions, is produced by white blood cells and other biological systems (such as the roots of marigolds) as a means of destroying foreign bodies. Ozone reacts directly with organic double bonds. Also, when ozone breaks down to dioxygen it gives rise to oxygen free radicals, which are highly reactive and capable of damaging many organic molecules. Moreover, it is believed that the powerful oxidizing properties of ozone may be a contributing factor of inflammation. The cause-and-effect relationship of how the ozone is created in the body and what it does is still under consideration and still subject to various interpretations, since other body chemical processes can trigger some of the same reactions. There is evidence linking the antibody-catalyzed water-oxidation pathway of the human immune response to the production of ozone. In this system, ozone is produced by antibody-catalyzed production of trioxidane from water and neutrophil-produced singlet oxygen.
When inhaled, ozone reacts with compounds lining the lungs to form specific, cholesterol-derived metabolites that are thought to facilitate the build-up and pathogenesis of atherosclerotic plaques (a form of heart disease). These metabolites have been confirmed as naturally occurring in human atherosclerotic arteries and are categorized into a class of secosterols termed atheronals, generated by ozonolysis of cholesterol's double bond to form a 5,6 secosterol as well as a secondary condensation product via aldolization.
Impact on plant growth and crop yields
Ozone has been implicated in adverse effects on plant growth: "... ozone reduced total chlorophylls, carotenoid and carbohydrate concentration, and increased 1-aminocyclopropane-1-carboxylic acid (ACC) content and ethylene production. In treated plants, the ascorbate leaf pool was decreased, while lipid peroxidation and solute leakage were significantly higher than in ozone-free controls. The data indicated that ozone triggered protective mechanisms against oxidative stress in citrus." Studies that have used pepper plants as a model have shown that ozone decreased fruit yield and changed fruit quality. Furthermore, a decrease in chlorophyll levels and antioxidant defences was observed in the leaves, as well as increased reactive oxygen species (ROS) levels and lipid and protein damage.
A 2022 study concludes that East Asia loses 63 billion dollars in crops per year due to ozone pollution, a byproduct of fossil fuel combustion. China loses about one-third of its potential wheat production and one-fourth of its rice production.
Safety regulations
Because of its strongly oxidizing properties, ozone is a primary irritant, affecting especially the eyes and respiratory system, and can be hazardous at even low concentrations. The Canadian Centre for Occupational Health and Safety reports that:
"Even very low concentrations of ozone can be harmful to the upper respiratory tract and the lungs. The severity of injury depends on both the concentration of ozone and the duration of exposure. Severe and permanent lung injury or death could result from even a very short-term exposure to relatively low concentrations."
To protect workers potentially exposed to ozone, the U.S. Occupational Safety and Health Administration (OSHA) has established a permissible exposure limit (PEL) of 0.1 μmol/mol (29 CFR 1910.1000, Table Z-1), calculated as an 8-hour time-weighted average. Higher concentrations are especially hazardous, and NIOSH has established an Immediately Dangerous to Life and Health limit (IDLH) of 5 μmol/mol. Work environments where ozone is used or where it is likely to be produced should have adequate ventilation, and it is prudent to have a monitor for ozone that will alarm if the concentration exceeds the OSHA PEL. Continuous monitors for ozone are available from several suppliers.
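A time-weighted average is simply the duration-weighted mean exposure over the 8-hour shift; the sketch below illustrates the check against the PEL, with an invented exposure profile as the example input.
# Sketch of an 8-hour time-weighted average (TWA) check against the
# 0.1 umol/mol OSHA PEL; the sample exposure profile is invented.
PEL = 0.1   # umol/mol, 8-hour TWA
def twa_8h(samples):
    """samples: list of (hours, concentration in umol/mol) pairs."""
    return sum(h * c for h, c in samples) / 8.0
profile = [(2, 0.05), (1, 0.30), (5, 0.02)]   # hypothetical work day
value = twa_8h(profile)
print(f"TWA = {value:.3f} umol/mol ->", "over PEL" if value > PEL else "within PEL")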
Elevated ozone exposure can occur on passenger aircraft, with levels depending on altitude and atmospheric turbulence. U.S. Federal Aviation Administration regulations set a limit of 250 nmol/mol with a maximum four-hour average of 100 nmol/mol. Some planes are equipped with ozone converters in the ventilation system to reduce passenger exposure.
Production
Ozone generators, or ozonators, are used to produce ozone for cleaning air or removing smoke odours in unoccupied rooms. These ozone generators can produce over 3 g of ozone per hour. Ozone often forms in nature under conditions where O2 will not react. Ozone used in industry is measured in μmol/mol (ppm, parts per million), nmol/mol (ppb, parts per billion), μg/m3, mg/h (milligrams per hour) or weight percent. The regime of applied concentrations ranges from 1% to 5% (in air) and from 6% to 14% (in oxygen) for older generation methods. New electrolytic methods can achieve dissolved ozone concentrations of 20% to 30% in output water.
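To give a rough feel for how the concentration figures relate to output in grams per hour, the sketch below assumes an illustrative 5 L/min oxygen feed, an oxygen density of about 1.43 g/L, and a 6% by-weight ozone concentration; all three numbers are assumptions chosen only for illustration.
# Rough conversion from generator output concentration (weight percent in
# the feed gas) to grams of ozone per hour.
feed_l_per_min = 5.0             # assumed oxygen feed rate
o2_density_g_per_l = 1.43        # oxygen at roughly 0 degC and 1 atm
wt_fraction_o3 = 0.06            # 6 % by weight in oxygen feed (assumed)
gas_g_per_h = feed_l_per_min * 60 * o2_density_g_per_l
print(f"ozone output ~ {gas_g_per_h * wt_fraction_o3:.0f} g/h")   # ~26 g/h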
Temperature and humidity play a large role in how much ozone is being produced using traditional generation methods (such as corona discharge and ultraviolet light). Old generation methods will produce less than 50% of nominal capacity if operated with humid ambient air, as opposed to very dry air. New generators, using electrolytic methods, can achieve higher purity and dissolution through using water molecules as the source of ozone production.
Coronal discharge method
This is the most common type of ozone generator for most industrial and personal uses. While variations of the "hot spark" coronal discharge method of ozone production exist, including medical grade and industrial grade ozone generators, these units usually work by means of a corona discharge tube or ozone plate. They are typically cost-effective and do not require an oxygen source other than the ambient air to produce ozone concentrations of 3–6%. Fluctuations in ambient air, due to weather or other environmental conditions, cause variability in ozone production. However, they also produce nitrogen oxides as a by-product. Use of an air dryer can reduce or eliminate nitric acid formation by removing water vapor and increase ozone production. At room temperature, nitric acid will form into a vapour that is hazardous if inhaled. Symptoms can include chest pain, shortness of breath, headaches and a dry nose and throat causing a burning sensation. Use of an oxygen concentrator can further increase the ozone production and further reduce the risk of nitric acid formation by removing not only the water vapor, but also the bulk of the nitrogen.
Ultraviolet light
UV ozone generators, or vacuum-ultraviolet (VUV) ozone generators, employ a light source that generates a narrow-band ultraviolet light, a subset of that produced by the Sun. The Sun's UV sustains the ozone layer in the stratosphere of Earth.
UV ozone generators use ambient air for ozone production, no air prep systems are used (air dryer or oxygen concentrator), therefore these generators tend to be less expensive. However, UV ozone generators usually produce ozone with a concentration of about 0.5% or lower which limits the potential ozone production rate. Another disadvantage of this method is that it requires the ambient air (oxygen) to be exposed to the UV source for a longer amount of time, and any gas that is not exposed to the UV source will not be treated. This makes UV generators impractical for use in situations that deal with rapidly moving air or water streams (in-duct air sterilization, for example). Production of ozone is one of the potential dangers of ultraviolet germicidal irradiation. VUV ozone generators are used in swimming pools and spa applications ranging to millions of gallons of water. VUV ozone generators, unlike corona discharge generators, do not produce harmful nitrogen by-products and also unlike corona discharge systems, VUV ozone generators work extremely well in humid air environments. There is also not normally a need for expensive off-gas mechanisms, and no need for air driers or oxygen concentrators which require extra costs and maintenance.
Cold plasma
In the cold plasma method, pure oxygen gas is exposed to a plasma created by a dielectric barrier discharge (DBD). The diatomic oxygen is split into single atoms, which then recombine in triplets to form ozone.
It is common in the industry to mislabel some DBD ozone generators as CD Corona Discharge generators. Typically all solid flat metal electrode ozone generators produce ozone using the dielectric barrier discharge method. Cold plasma machines use pure oxygen as the input source and produce a maximum concentration of about 24% ozone. They produce far greater quantities of ozone in a given time compared to ultraviolet production that has about 2% efficiency. The discharges manifest as filamentary transfer of electrons (micro discharges) in a gap between two electrodes. In order to evenly distribute the micro discharges, a dielectric insulator must be used to separate the metallic electrodes and to prevent arcing.
Electrolytic
Electrolytic ozone generation (EOG) splits water molecules into H2, O2, and O3.
In most EOG methods, the hydrogen gas will be removed to leave oxygen and ozone as the only reaction products. Therefore, EOG can achieve higher dissolution in water without other competing gases found in corona discharge method, such as nitrogen gases present in ambient air. This method of generation can achieve concentrations of 20–30% and is independent of air quality because water is used as the source material. Production of ozone electrolytically is typically unfavorable because of the high overpotential required to produce ozone as compared to oxygen. This is why ozone is not produced during typical water electrolysis. However, it is possible to increase the overpotential of oxygen by careful catalyst selection such that ozone is preferentially produced under electrolysis. Catalysts typically chosen for this approach are lead dioxide or boron-doped diamond.
The ozone to oxygen ratio is improved by increasing current density at the anode, cooling the electrolyte around the anode close to 0 °C, using an acidic electrolyte (such as dilute sulfuric acid) instead of a basic solution, and by applying pulsed current instead of DC.
Special considerations
Ozone cannot be stored and transported like other industrial gases (because it quickly decays into diatomic oxygen) and must therefore be produced on site. Available ozone generators vary in the arrangement and design of the high-voltage electrodes. At production capacities higher than 20 kg per hour, a gas/water tube heat-exchanger may be utilized as ground electrode and assembled with tubular high-voltage electrodes on the gas-side. The regime of typical gas pressures is around absolute in oxygen and absolute in air. Several megawatts of electrical power may be installed in large facilities, applied as single phase AC current at 50 to 8000 Hz and peak voltages between 3,000 and 20,000 volts. Applied voltage is usually inversely related to the applied frequency.
The dominating parameter influencing ozone generation efficiency is the gas temperature, which is controlled by cooling water temperature and/or gas velocity. The cooler the water, the better the ozone synthesis. The lower the gas velocity, the higher the concentration (but the lower the net ozone produced). At typical industrial conditions, almost 90% of the effective power is dissipated as heat and needs to be removed by a sufficient cooling water flow.
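The cooling requirement follows from a simple heat balance, flow = P_heat/(c·ΔT); in the sketch below the 100 kW generator power and the 5 K allowed water temperature rise are assumed values chosen only for illustration.
# Cooling-water sketch for the "almost 90 % of power ends up as heat" point.
power_w = 100_000       # assumed installed electrical power, W
heat_w = 0.9 * power_w  # fraction dissipated as heat, from the text
c_water = 4186.0        # specific heat of water, J/(kg*K)
delta_t = 5.0           # allowed water temperature rise, K (assumed)
flow_kg_s = heat_w / (c_water * delta_t)
print(f"required cooling flow ~ {flow_kg_s:.1f} kg/s (~{flow_kg_s*3.6:.0f} m^3/h)")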
Because of the high reactivity of ozone, only a few materials may be used, such as stainless steel (quality 316L), titanium, aluminium (as long as no moisture is present), glass, polytetrafluoroethylene, or polyvinylidene fluoride. Viton may be used with the restriction of constant mechanical forces and absence of humidity (humidity limitations apply depending on the formulation). Hypalon may be used with the restriction that no water comes in contact with it, except for normal atmospheric levels. Embrittlement or shrinkage is the common mode of failure of elastomers with exposure to ozone. Ozone cracking is the common mode of failure of elastomer seals like O-rings.
Silicone rubbers are usually adequate for use as gaskets in ozone concentrations below 1 wt%, such as in equipment for accelerated aging of rubber samples.
Incidental production
Ozone may be formed from O2 by electrical discharges and by the action of high energy electromagnetic radiation. Unsuppressed arcing in electrical contacts, motor brushes, or mechanical switches breaks down the chemical bonds of the atmospheric oxygen surrounding the contacts [O2 → 2 O]. Free radicals of oxygen in and around the arc recombine to create ozone [O + O2 → O3]. Certain electrical equipment generates significant levels of ozone. This is especially true of devices using high voltages, such as ionic air purifiers, laser printers, photocopiers, tasers and arc welders. Electric motors using brushes can generate ozone from repeated sparking inside the unit. Large motors that use brushes, such as those used by elevators or hydraulic pumps, will generate more ozone than smaller motors.
Ozone is similarly formed in the Catatumbo lightning storms phenomenon on the Catatumbo River in Venezuela, though ozone's instability makes it dubious that it has any effect on the ozonosphere.
It is the world's largest single natural generator of ozone, which has prompted calls for it to be designated a UNESCO World Heritage Site.
Laboratory production
In the laboratory, ozone can be produced by electrolysis using a 9 volt battery, a pencil graphite rod cathode, a platinum wire anode and a 3 molar sulfuric acid electrolyte. The half-cell reactions taking place are the oxidation of water to ozone at the anode and the reduction of hydrogen ions at the cathode:
3 H2O -> O3 + 6 H+ + 6 e− (E° ≈ +1.5 V)
2 H+ + 2 e− -> H2 (E° = 0 V)
where E° represents the standard electrode potential.
In the net reaction, three equivalents of water are converted into one equivalent of ozone and three equivalents of hydrogen. Oxygen formation is a competing reaction.
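Assuming the anodic half-reaction written above (six electrons per ozone molecule) and an idealised 100% current efficiency, Faraday's law gives an upper bound on the yield; the 1 A, 1 hour run in this sketch is an arbitrary example, and real yields are far lower because oxygen formation competes.
# Faraday's-law sketch for the electrolysis above: 6 electrons per O3 if the
# anode reaction is 3 H2O -> O3 + 6 H+ + 6 e-. A 1 A, 1 hour run and 100 %
# current efficiency are idealised assumptions.
F = 96485.0             # C per mole of electrons
current_a, hours = 1.0, 1.0
mol_e = current_a * hours * 3600 / F
mol_o3 = mol_e / 6
print(f"ideal yield ~ {mol_o3 * 48:.2f} g O3")   # ~0.30 g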
It can also be generated by a high voltage arc. In its simplest form, high voltage AC, such as the output of a neon-sign transformer is connected to two metal rods with the ends placed sufficiently close to each other to allow an arc. The resulting arc will convert atmospheric oxygen to ozone.
It is often desirable to contain the ozone. This can be done with an apparatus consisting of two concentric glass tubes sealed together at the top, with gas ports at the top and bottom of the outer tube. The inner core should have a length of metal foil inserted into it connected to one side of the power source. The other side of the power source should be connected to another piece of foil wrapped around the outer tube. A source of dry O2 is applied to the bottom port. When high voltage is applied to the foil leads, electricity will discharge through the dry dioxygen in the middle and form O3 and O2, which will flow out the top port. This is called a Siemens ozoniser. The reaction can be summarized as follows:
3 O2 ->[electricity] 2 O3
Applications
Industry
The largest use of ozone is in the preparation of pharmaceuticals, synthetic lubricants, and many other commercially useful organic compounds, where it is used to sever carbon-carbon bonds. It can also be used for bleaching substances and for killing microorganisms in air and water sources. Many municipal drinking water systems kill bacteria with ozone instead of the more common chlorine. Ozone has a very high oxidation potential. Ozone does not form organochlorine compounds, nor does it remain in the water after treatment. Ozone can form the suspected carcinogen bromate in source water with high bromide concentrations. The U.S. Safe Drinking Water Act mandates that these systems introduce an amount of chlorine to maintain a minimum of 0.2 μmol/mol residual free chlorine in the pipes, based on results of regular testing. Where electrical power is abundant, ozone is a cost-effective method of treating water, since it is produced on demand and does not require transportation and storage of hazardous chemicals. Once it has decayed, it leaves no taste or odour in drinking water.
Although low levels of ozone have been advertised to be of some disinfectant use in residential homes, the concentration of ozone in dry air required to have a rapid, substantial effect on airborne pathogens exceeds safe levels recommended by the U.S. Occupational Safety and Health Administration and Environmental Protection Agency. Humidity control can vastly improve both the killing power of the ozone and the rate at which it decays back to oxygen (more humidity allows more effectiveness). Spore forms of most pathogens are very tolerant of atmospheric ozone in concentrations at which asthma patients start to have issues.
In 1908 artificial ozonisation of the Central Line of the London Underground was introduced for aerial disinfection. The process was found to be worthwhile, but was phased out by 1956. However the beneficial effect was maintained by the ozone created incidentally from the electrical discharges of the train motors (see above: Incidental production).
Ozone generators were made available to schools and universities in Wales for the Autumn term 2021, to disinfect classrooms after COVID-19 outbreaks.
Industrially, ozone is used to:
Disinfect laundry in hospitals, food factories, care homes etc.;
Disinfect water in place of chlorine
Deodorize air and objects, such as after a fire. This process is extensively used in fabric restoration
Kill bacteria on food or on contact surfaces;
Water-intensive industries such as breweries and dairy plants can make effective use of dissolved ozone as a replacement for chemical sanitizers such as peracetic acid, hypochlorite or heat.
Disinfect cooling towers and control Legionella with reduced chemical consumption, water bleed-off and increased performance.
Sanitize swimming pools and spas
Kill insects in stored grain
Scrub yeast and mold spores from the air in food processing plants;
Wash fresh fruits and vegetables to kill yeast, mold and bacteria;
Chemically attack contaminants in water (iron, arsenic, hydrogen sulfide, nitrites, and complex organics lumped together as "colour");
Provide an aid to flocculation (agglomeration of molecules, which aids in filtration, where the iron and arsenic are removed);
Manufacture chemical compounds via chemical synthesis
Clean and bleach fabrics (the former use is utilized in fabric restoration; the latter use is patented);
Act as an antichlor in chlorine-based bleaching;
Assist in processing plastics to allow adhesion of inks;
Age rubber samples to determine the useful life of a batch of rubber;
Eradicate water-borne parasites such as Giardia lamblia and Cryptosporidium in surface water treatment plants.
Ozone is a reagent in many organic reactions in the laboratory and in industry. Ozonolysis is the cleavage of an alkene to carbonyl compounds.
Many hospitals around the world use large ozone generators to decontaminate operating rooms between surgeries. The rooms are cleaned and then sealed airtight before being filled with ozone which effectively kills or neutralizes all remaining bacteria.
Ozone is used as an alternative to chlorine or chlorine dioxide in the bleaching of wood pulp. It is often used in conjunction with oxygen and hydrogen peroxide to eliminate the need for chlorine-containing compounds in the manufacture of high-quality, white paper.
Ozone can be used to detoxify cyanide wastes (for example from gold and silver mining) by oxidizing cyanide to cyanate and eventually to carbon dioxide.
Water disinfection
Since the invention of Dielectric Barrier Discharge (DBD) plasma reactors, they have been employed for water treatment with ozone. However, with cheaper alternative disinfectants like chlorine, such applications of DBD ozone water decontamination have been limited by high power consumption and bulky equipment. Nevertheless, with research revealing the negative impacts of common disinfectants like chlorine, such as toxic residuals and ineffectiveness in killing certain micro-organisms, DBD plasma-based ozone decontamination is of interest among currently available technologies. Although ozonation of water with a high concentration of bromide does lead to the formation of undesirable brominated disinfection byproducts, unless drinking water is produced by desalination, ozonation can generally be applied without concern for these byproducts. Advantages of ozone include a high thermodynamic oxidation potential, less sensitivity to organic material and better tolerance for pH variations, while retaining the ability to kill bacteria, fungi and viruses, as well as spores and cysts. Although ozone has been widely accepted in Europe for decades, it is sparingly used for decontamination in the U.S. due to high power consumption, bulky installation and the stigma attached to ozone toxicity. Considering this, recent research efforts have been directed towards the study of effective ozone water treatment systems. Researchers have looked into lightweight and compact low-power surface DBD reactors, energy-efficient volume DBD reactors and low-power micro-scale DBD reactors. Such studies can help pave the path to re-acceptance of DBD plasma-based ozone decontamination of water, especially in the U.S.
Consumers
Ozone levels which are safe for people are ineffective at killing fungi and bacteria. Some consumer disinfection and cosmetic products emit ozone at levels harmful to human health.
Devices generating high levels of ozone, some of which use ionization, are used to sanitize and deodorize uninhabited buildings, rooms, ductwork, woodsheds, boats and other vehicles.
Ozonated water is used to launder clothes and to sanitize food, drinking water, and surfaces in the home. According to the U.S. Food and Drug Administration (FDA), it is "amending the food additive regulations to provide for the safe use of ozone in gaseous and aqueous phases as an antimicrobial agent on food, including meat and poultry." Studies at California Polytechnic University demonstrated that 0.3 μmol/mol levels of ozone dissolved in filtered tapwater can produce a reduction of more than 99.99% in such food-borne microorganisms as Salmonella, E. coli O157:H7 and Campylobacter. This quantity is 20,000 times the WHO-recommended limits stated above.
Ozone can be used to remove pesticide residues from fruits and vegetables.
Ozone is used in homes and hot tubs to kill bacteria in the water and to reduce the amount of chlorine or bromine required by reactivating them to their free state. Since ozone does not remain in the water long enough, ozone by itself is ineffective at preventing cross-contamination among bathers and must be used in conjunction with halogens. Gaseous ozone created by ultraviolet light or by corona discharge is injected into the water.
Ozone is also widely used in the treatment of water in aquariums and fishponds. Its use can minimize bacterial growth, control parasites, eliminate transmission of some diseases, and reduce or eliminate "yellowing" of the water. Ozone must not come in contact with fishes' gill structures. Natural saltwater (with life forms) provides enough "instantaneous demand" that controlled amounts of ozone activate bromide ions to hypobromous acid, and the ozone entirely decays in a few seconds to minutes. If oxygen-fed ozone is used, the water will be higher in dissolved oxygen and fishes' gill structures will atrophy, making them dependent on oxygen-enriched water.
Aquaculture
Ozonation – a process of infusing water with ozone – can be used in aquaculture to facilitate organic breakdown. Ozone is also added to recirculating systems to reduce nitrite levels through conversion into nitrate. If nitrite levels in the water are high, nitrites will also accumulate in the blood and tissues of fish, where they interfere with oxygen transport (nitrite causes oxidation of the heme group of haemoglobin from ferrous (Fe2+) to ferric (Fe3+), making haemoglobin unable to bind O2). Despite these apparent positive effects, ozone use in recirculation systems has been linked to reducing the level of bioavailable iodine in salt water systems, resulting in iodine deficiency symptoms such as goitre and decreased growth in Senegalese sole (Solea senegalensis) larvae.
Ozonated seawater is used for surface disinfection of haddock and Atlantic halibut eggs against nodavirus. Nodavirus is a lethal and vertically transmitted virus which causes severe mortality in fish. Haddock eggs should not be treated with high ozone levels, as eggs so treated did not hatch and died after 3–4 days.
Agriculture
Ozone application on freshly cut pineapple and banana shows an increase in flavonoids and total phenol content when exposure is up to 20 minutes. A decrease in ascorbic acid (one form of vitamin C) content is observed, but the positive effect on total phenol content and flavonoids can outweigh the negative effect. Tomatoes upon treatment with ozone show an increase in β-carotene, lutein and lycopene. However, ozone application on strawberries in the pre-harvest period shows a decrease in ascorbic acid content.
Ozone facilitates the extraction of some heavy metals from soil using EDTA. EDTA forms strong, water-soluble coordination compounds with some heavy metals (Pb, Zn) thereby making it possible to dissolve them out from contaminated soil. If contaminated soil is pre-treated with ozone, the extraction efficacy of Pb, Am and Pu increases by 11.0–28.9%, 43.5% and 50.7% respectively.
Effect on pollinators
Crop pollination is an essential part of an ecosystem, and ozone can have detrimental effects on plant-pollinator interactions. Pollinators carry pollen from one plant to another, an essential cycle within an ecosystem. Changes in atmospheric conditions around pollination sites, or exposure to xenobiotics, could cause unknown changes to the natural cycles of pollinators and flowering plants. In a study conducted in North-Western Europe, crop pollinators were more negatively affected when ozone levels were higher.
Alternative medicine
The use of ozone for the treatment of medical conditions is not supported by high quality evidence, and is generally considered alternative medicine.
See also
Chappuis absorption
Cyclic ozone
Global Ozone Monitoring by Occultation of Stars (GOMOS)
International Day for the Preservation of the Ozone Layer (September 16)
Lightning
Nitrogen oxides
Ozone depletion, including the phenomenon known as the ozone hole.
Ozone Monitor
Ozone Monitoring Instrument
Ozone therapy
Ozoneweb
Ozonide (ion)
Ozonolysis
Polymer degradation
Sterilization (microbiology)
References
Footnotes
Citations
Further reading
Becker, K. H., U. Kogelschatz, K. H. Schoenbach, R. J. Barker (ed.). Non-Equilibrium Air Plasmas at Atmospheric Pressure. Series in Plasma Physics. Bristol and Philadelphia: Institute of Physics Publishing Ltd; 2005.
United States Environmental Protection Agency. Risk and Benefits Group. (August 2014). Health Risk and Exposure Assessment for Ozone: Final Report.
External links
International Ozone Association
European Environment Agency's near real-time ozone map (ozoneweb)
NASA's Ozone Resource Page
OSHA Ozone Information
Paul Crutzen Interview—Video of Nobel Laureate Paul Crutzen talking to Nobel Laureate Harry Kroto by the Vega Science Trust
NASA's Earth Observatory article on Ozone
International Chemical Safety Card 0068
NIOSH Pocket Guide to Chemical Hazards
National Institute of Environmental Health Sciences, Ozone Information
NASA Study Links "Smog" to Arctic Warming—NASA Goddard Institute for Space Studies (GISS) study shows the warming effect of ozone in the Arctic during winter and spring.
Ground-level ozone information from the American Lung Association of New England
Air pollution
Allotropes of oxygen
Disinfectants
Environmental chemistry
Gases with color
Greenhouse gases
Industrial gases
Oxidizing agents
Pollution
Odor | Ozone | ["Chemistry", "Environmental_science"] | 15,308 | ["Redox", "Allotropes", "Environmental chemistry", "Oxidizing agents", "Ozone", "Allotropes of oxygen", "Industrial gases", "nan", "Chemical process engineering", "Greenhouse gases"] |
22,739 | https://en.wikipedia.org/wiki/Obfuscation%20%28software%29 | In software development, obfuscation is the practice of creating source or machine code that is intentionally difficult for humans or computers to understand. Similar to obfuscation in natural language, code obfuscation may involve using unnecessarily roundabout ways to write statements. Programmers often obfuscate code to conceal its purpose, logic, or embedded values. The primary reasons for doing so are to prevent tampering, deter reverse engineering, or to create a puzzle or recreational challenge to deobfuscate the code, a challenge often included in crackmes. While obfuscation can be done manually, it is more commonly performed using obfuscators.
Overview
The architecture and characteristics of some languages may make them easier to obfuscate than others. C, C++, and the Perl programming language are some examples of languages easy to obfuscate. Haskell is also quite obfuscatable despite being quite different in structure.
The properties that make a language obfuscatable are not immediately obvious.
Techniques
Types of obfuscations include simple keyword substitution, use or non-use of whitespace to create artistic effects, and self-generating or heavily compressed programs.
According to Nick Montfort, techniques may include:
naming obfuscation, which includes naming variables in a meaningless or deceptive way;
data/code/comment confusion, which includes making some actual code look like comments or confusing syntax with data;
double coding, which can be displaying code in poetry form or interesting shapes.
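For illustration, here is a minimal Python sketch of the first technique in the list above, naming obfuscation; the function names and example values are invented for this sketch.
# Naming obfuscation in a toy Python example.
# Readable version:
def average(values):
    return sum(values) / len(values)
# Same behaviour with deliberately meaningless, deceptive names:
def O0O(l1I):
    Il1 = sum(l1I)
    return Il1 / len(l1I)
assert average([1, 2, 3]) == O0O([1, 2, 3]) == 2.0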
Automated tools
A variety of tools exist to perform or assist with code obfuscation. These include experimental research tools developed by academics, hobbyist tools, commercial products written by professionals, and open-source software. Additionally, deobfuscation tools exist, aiming to reverse the obfuscation process.
While most commercial obfuscation solutions transform either program source code or platform-independent bytecode (as used by Java and .NET), some also work directly on compiled binaries.
Some Python examples can be found in the official Python programming FAQ and elsewhere.
The movfuscator C compiler for the x86_32 ISA uses only the mov instruction in order to obfuscate.
Recreational
Writing and reading obfuscated source code can be a brain teaser. A number of programming contests reward the most creatively obfuscated code, such as the International Obfuscated C Code Contest and the Obfuscated Perl Contest.
Short obfuscated Perl programs may be used in signatures of Perl programmers. These are JAPHs ("Just another Perl hacker").
Cryptographic
Cryptographers have explored the idea of obfuscating code so that reverse-engineering the code is cryptographically hard. This is formalized in the many proposals for indistinguishability obfuscation, a cryptographic primitive that, if possible to build securely, would allow one to construct many other kinds of cryptography, including completely novel types that no one knows how to make. (A stronger notion, black-box obfuscation, is known to be impossible in general.)
Disadvantages of obfuscation
While obfuscation can make reading, writing, and reverse-engineering a program difficult and time-consuming, it will not necessarily make it impossible.
It adds time and complexity to the build process for the developers.
It can make debugging issues after the software has been obfuscated extremely difficult.
Once code is no longer maintained, hobbyists may want to maintain the program, add mods, or understand it better. Obfuscation makes it hard for end users to do useful things with the code.
Certain kinds of obfuscation (e.g. code that is not just a local binary but downloads mini binaries from a web server as needed) can degrade performance and/or require an Internet connection.
Notifying users of obfuscated code
Some anti-virus software, such as AVG AntiVirus, will also alert users when they land on a website with code that is manually obfuscated, as one of the purposes of obfuscation can be to hide malicious code. However, some developers may employ code obfuscation for the purpose of reducing file size or increasing security. The average user may not expect their antivirus software to provide alerts about an otherwise harmless piece of code, especially from trusted corporations, so such a feature may actually deter users from using legitimate software.
Mozilla and Google disallow browser extensions containing obfuscated code in their add-ons store.
Obfuscation and copyleft licenses
There has been debate on whether it is illegal to skirt copyleft software licenses by releasing source code in obfuscated form, such as in cases in which the author is less willing to make the source code available. The issue is addressed in the GNU General Public License by requiring the "preferred form for making modifications" to be made available. The GNU website states "Obfuscated 'source code' is not real source code and does not count as source code."
Decompilers
A decompiler is a tool that can reverse-engineer source code from an executable or library. This process is sometimes referred to as a man-at-the-end (MATE) attack, inspired by the traditional "man-in-the-middle attack" in cryptography. The decompiled source code is often hard to read, containing random function and variable names, incorrect variable types, and logic that differs from the original source code due to compiler optimizations.
Model obfuscation
Model obfuscation is a technique used to hide the internal structure of a machine learning model. Obfuscation turns a model into a black box; it is the opposite of explainable AI. Obfuscation can also be applied to training data before it is fed into the model, for example by adding random noise. This hides sensitive information about the properties of individual samples and of groups of samples.
See also
AARD code
Spaghetti code
Decompilation
Esoteric programming language
Quine
Overlapping instructions
Polymorphic code
Hardware obfuscation
Underhanded C Contest
Source-to-source compiler
ProGuard (Java Obfuscator)
Dotfuscator (.Net Obfuscator)
Digital rights management
Indistinguishability obfuscation
Source code beautification
References
Further reading
Seyyedhamzeh, Javad, ABCME: A Novel Metamorphic Engine, 17th National Computer Conference, Sharif University of Technology, Tehran, Iran, 2012.
B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai, S. Vadhan and K. Yang. "On the (Im)possibility of Obfuscating Programs". 21st Annual International Cryptology Conference, Santa Barbara, California, USA. Springer Verlag LNCS Volume 2139, 2001.
External links
The International Obfuscated C Code Contest
Protecting Java Code Via Code Obfuscation, ACM Crossroads, Spring 1998 issue
Can we obfuscate programs?
Yury Lifshits. Lecture Notes on Program Obfuscation (Spring'2005)
c2:BlackBoxComputation
Source code
Program transformation
es:Ofuscación#Informática | Obfuscation (software) | [
"Technology",
"Engineering"
] | 1,469 | [
"Cybersecurity engineering",
"Software obfuscation"
] |
22,773 | https://en.wikipedia.org/wiki/Oxidative%20phosphorylation | Oxidative phosphorylation or electron transport-linked phosphorylation or terminal oxidation is the metabolic pathway in which cells use enzymes to oxidize nutrients, thereby releasing chemical energy in order to produce adenosine triphosphate (ATP). In eukaryotes, this takes place inside mitochondria. Almost all aerobic organisms carry out oxidative phosphorylation. This pathway is so pervasive because it releases more energy than alternative fermentation processes such as anaerobic glycolysis.
The energy stored in the chemical bonds of glucose is released by the cell in the citric acid cycle, producing carbon dioxide and the energetic electron donors NADH and FADH2. Oxidative phosphorylation uses these molecules and O2 to produce ATP, which is used throughout the cell whenever energy is needed. During oxidative phosphorylation, electrons are transferred from the electron donors to a series of electron acceptors in a series of redox reactions ending in oxygen, whose reaction releases half of the total energy.
In eukaryotes, these redox reactions are catalyzed by a series of protein complexes within the inner membrane of the cell's mitochondria, whereas, in prokaryotes, these proteins are located in the cell's plasma membrane. These linked sets of proteins are called the electron transport chain. In eukaryotes, five main protein complexes are involved, whereas in prokaryotes many different enzymes are present, using a variety of electron donors and acceptors.
The energy transferred by electrons flowing through this electron transport chain is used to transport protons across the inner mitochondrial membrane, in a process called electron transport. This generates potential energy in the form of a pH gradient and the resulting electrical potential across this membrane. This store of energy is tapped when protons flow back across the membrane and down the potential energy gradient, through a large enzyme called ATP synthase in a process called chemiosmosis. The ATP synthase uses the energy to transform adenosine diphosphate (ADP) into adenosine triphosphate, in a phosphorylation reaction. The reaction is driven by the proton flow, which forces the rotation of a part of the enzyme. The ATP synthase is a rotary mechanical motor.
Although oxidative phosphorylation is a vital part of metabolism, it produces reactive oxygen species such as superoxide and hydrogen peroxide, which lead to propagation of free radicals, damaging cells and contributing to disease and, possibly, aging and senescence. The enzymes carrying out this metabolic pathway are also the target of many drugs and poisons that inhibit their activities.
Chemiosmosis
Oxidative phosphorylation works by using energy-releasing chemical reactions to drive energy-requiring reactions. The two sets of reactions are said to be coupled. This means one cannot occur without the other. The chain of redox reactions driving the flow of electrons through the electron transport chain, from electron donors such as NADH to electron acceptors such as oxygen and hydrogen (protons), is an exergonic process – it releases energy, whereas the synthesis of ATP is an endergonic process, which requires an input of energy. Both the electron transport chain and the ATP synthase are embedded in a membrane, and energy is transferred from the electron transport chain to the ATP synthase by movements of protons across this membrane, in a process called chemiosmosis. A current of protons is driven from the negative N-side of the membrane to the positive P-side through the proton-pumping enzymes of the electron transport chain. The movement of protons creates an electrochemical gradient across the membrane, which is called the proton-motive force. It has two components: a difference in proton concentration (a H+ gradient, ΔpH) and a difference in electric potential, with the N-side having a negative charge.
ATP synthase releases this stored energy by completing the circuit and allowing protons to flow down the electrochemical gradient, back to the N-side of the membrane. The electrochemical gradient drives the rotation of part of the enzyme's structure and couples this motion to the synthesis of ATP.
The two components of the proton-motive force are thermodynamically equivalent: In mitochondria, the largest part of energy is provided by the potential; in alkaliphile bacteria the electrical energy even has to compensate for a counteracting inverse pH difference. Conversely, chloroplasts operate mainly on ΔpH. However, they also require a small membrane potential for the kinetics of ATP synthesis. In the case of the fusobacterium Propionigenium modestum it drives the counter-rotation of subunits a and c of the FO motor of ATP synthase.
The amount of energy released by oxidative phosphorylation is high, compared with the amount produced by anaerobic fermentation. Glycolysis produces only 2 ATP molecules, but somewhere between 30 and 36 ATPs are produced by the oxidative phosphorylation of the 10 NADH and 2 succinate molecules made by converting one molecule of glucose to carbon dioxide and water, while each cycle of beta oxidation of a fatty acid yields about 14 ATPs. These ATP yields are theoretical maximum values; in practice, some protons leak across the membrane, lowering the yield of ATP.
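As a back-of-the-envelope illustration of where such figures come from (the ATP-per-NADH and ATP-per-succinate ratios used here are common textbook assumptions, not values stated above):

10 NADH × 3 ATP/NADH + 2 succinate × 2 ATP/succinate = 34 ATP

The older textbook ratios of 3 and 2 thus give a total near the top of the quoted range, while the lower, more recent consensus ratios of about 2.5 and 1.5 give 10 × 2.5 + 2 × 1.5 = 28 ATP. Differing assumptions of this kind, together with proton leak, are why quoted yields vary.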
Electron and proton transfer molecules
The electron transport chain carries both protons and electrons, passing electrons from donors to acceptors, and transporting protons across a membrane. These processes use both soluble and protein-bound transfer molecules. In the mitochondria, electrons are transferred within the intermembrane space by the water-soluble electron transfer protein cytochrome c. This carries only electrons, and these are transferred by the reduction and oxidation of an iron atom that the protein holds within a heme group in its structure. Cytochrome c is also found in some bacteria, where it is located within the periplasmic space.
Within the inner mitochondrial membrane, the lipid-soluble electron carrier coenzyme Q10 (Q) carries both electrons and protons by a redox cycle. This small benzoquinone molecule is very hydrophobic, so it diffuses freely within the membrane. When Q accepts two electrons and two protons, it becomes reduced to the ubiquinol form (QH2); when QH2 releases two electrons and two protons, it becomes oxidized back to the ubiquinone (Q) form. As a result, if two enzymes are arranged so that Q is reduced on one side of the membrane and QH2 oxidized on the other, ubiquinone will couple these reactions and shuttle protons across the membrane. Some bacterial electron transport chains use different quinones, such as menaquinone, in addition to ubiquinone.
Within proteins, electrons are transferred between flavin cofactors, iron–sulfur clusters and cytochromes. There are several types of iron–sulfur cluster. The simplest kind found in the electron transfer chain consists of two iron atoms joined by two atoms of inorganic sulfur; these are called [2Fe–2S] clusters. The second kind, called [4Fe–4S], contains a cube of four iron atoms and four sulfur atoms. Each iron atom in these clusters is coordinated by an additional amino acid, usually by the sulfur atom of cysteine. Metal ion cofactors undergo redox reactions without binding or releasing protons, so in the electron transport chain they serve solely to transport electrons through proteins. Electrons move quite long distances through proteins by hopping along chains of these cofactors. This occurs by quantum tunnelling, which is rapid over distances of less than 1.4 nm.
Eukaryotic electron transport chains
Many catabolic biochemical processes, such as glycolysis, the citric acid cycle, and beta oxidation, produce the reduced coenzyme NADH. This coenzyme contains electrons that have a high transfer potential; in other words, they will release a large amount of energy upon oxidation. However, the cell does not release this energy all at once, as this would be an uncontrollable reaction. Instead, the electrons are removed from NADH and passed to oxygen through a series of enzymes that each release a small amount of the energy. This set of enzymes, consisting of complexes I through IV, is called the electron transport chain and is found in the inner membrane of the mitochondrion. Succinate is also oxidized by the electron transport chain, but feeds into the pathway at a different point.
In eukaryotes, the enzymes in this electron transport system use the energy released by the oxidation of NADH by O2 to pump protons across the inner membrane of the mitochondrion. This causes protons to build up in the intermembrane space, and generates an electrochemical gradient across the membrane. The energy stored in this potential is then used by ATP synthase to produce ATP. Oxidative phosphorylation in the eukaryotic mitochondrion is the best-understood example of this process. The mitochondrion is present in almost all eukaryotes, with the exception of anaerobic protozoa such as Trichomonas vaginalis that instead reduce protons to hydrogen in a remnant mitochondrion called a hydrogenosome.
NADH-coenzyme Q oxidoreductase (complex I)
NADH-coenzyme Q oxidoreductase, also known as NADH dehydrogenase or complex I, is the first protein in the electron transport chain. Complex I is a giant enzyme with the mammalian complex I having 46 subunits and a molecular mass of about 1,000 kilodaltons (kDa). The structure is known in detail only from a bacterium; in most organisms the complex resembles a boot with a large "ball" poking out from the membrane into the mitochondrion. The genes that encode the individual proteins are contained in both the cell nucleus and the mitochondrial genome, as is the case for many enzymes present in the mitochondrion.
The reaction that is catalyzed by this enzyme is the two electron oxidation of NADH by coenzyme Q10 or ubiquinone (represented as Q in the equation below), a lipid-soluble quinone that is found in the mitochondrion membrane:
The start of the reaction, and indeed of the entire electron chain, is the binding of a NADH molecule to complex I and the donation of two electrons. The electrons enter complex I via a prosthetic group attached to the complex, flavin mononucleotide (FMN). The addition of electrons to FMN converts it to its reduced form, FMNH2. The electrons are then transferred through a series of iron–sulfur clusters: the second kind of prosthetic group present in the complex. There are both [2Fe–2S] and [4Fe–4S] iron–sulfur clusters in complex I.
As the electrons pass through this complex, four protons are pumped from the matrix into the intermembrane space. Exactly how this occurs is unclear, but it seems to involve conformational changes in complex I that cause the protein to bind protons on the N-side of the membrane and release them on the P-side of the membrane. Finally, the electrons are transferred from the chain of iron–sulfur clusters to a ubiquinone molecule in the membrane. Reduction of ubiquinone also contributes to the generation of a proton gradient, as two protons are taken up from the matrix as it is reduced to ubiquinol (QH2).
Succinate-Q oxidoreductase (complex II)
Succinate-Q oxidoreductase, also known as complex II or succinate dehydrogenase, is a second entry point to the electron transport chain. It is unusual because it is the only enzyme that is part of both the citric acid cycle and the electron transport chain. Complex II consists of four protein subunits and contains a bound flavin adenine dinucleotide (FAD) cofactor, iron–sulfur clusters, and a heme group that does not participate in electron transfer to coenzyme Q, but is believed to be important in decreasing production of reactive oxygen species. It oxidizes succinate to fumarate and reduces ubiquinone. As this reaction releases less energy than the oxidation of NADH, complex II does not transport protons across the membrane and does not contribute to the proton gradient.
In some eukaryotes, such as the parasitic worm Ascaris suum, an enzyme similar to complex II, fumarate reductase (menaquinol:fumarate
oxidoreductase, or QFR), operates in reverse to oxidize ubiquinol and reduce fumarate. This allows the worm to survive in the anaerobic environment of the large intestine, carrying out anaerobic oxidative phosphorylation with fumarate as the electron acceptor. Another unconventional function of complex II is seen in the malaria parasite Plasmodium falciparum. Here, the reversed action of complex II as an oxidase is important in regenerating ubiquinol, which the parasite uses in an unusual form of pyrimidine biosynthesis.
Electron transfer flavoprotein-Q oxidoreductase
Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-Q oxidoreductase), also known as electron transferring-flavoprotein dehydrogenase, is a third entry point to the electron transport chain. It is an enzyme that accepts electrons from electron-transferring flavoprotein in the mitochondrial matrix, and uses these electrons to reduce ubiquinone. This enzyme contains a flavin and a [4Fe–4S] cluster, but, unlike the other respiratory complexes, it attaches to the surface of the membrane and does not cross the lipid bilayer.
In mammals, this metabolic pathway is important in beta oxidation of fatty acids and catabolism of amino acids and choline, as it accepts electrons from multiple acyl-CoA dehydrogenases. In plants, ETF-Q oxidoreductase is also important in the metabolic responses that allow survival in extended periods of darkness.
Q-cytochrome c oxidoreductase (complex III)
Q-cytochrome c oxidoreductase is also known as cytochrome c reductase, cytochrome bc1 complex, or simply complex III. In mammals, this enzyme is a dimer, with each subunit complex containing 11 protein subunits, an [2Fe-2S] iron–sulfur cluster and three cytochromes: one cytochrome c1 and two b cytochromes. A cytochrome is a kind of electron-transferring protein that contains at least one heme group. The iron atoms inside complex III's heme groups alternate between a reduced ferrous (+2) and oxidized ferric (+3) state as the electrons are transferred through the protein.
The reaction catalyzed by complex III is the oxidation of one molecule of ubiquinol and the reduction of two molecules of cytochrome c, a heme protein loosely associated with the mitochondrion. Unlike coenzyme Q, which carries two electrons, cytochrome c carries only one electron.
As only one of the electrons can be transferred from the QH2 donor to a cytochrome c acceptor at a time, the reaction mechanism of complex III is more elaborate than those of the other respiratory complexes, and occurs in two steps called the Q cycle. In the first step, the enzyme binds three substrates, first, QH2, which is then oxidized, with one electron being passed to the second substrate, cytochrome c. The two protons released from QH2 pass into the intermembrane space. The third substrate is Q, which accepts the second electron from the QH2 and is reduced to Q.−, which is the ubisemiquinone free radical. The first two substrates are released, but this ubisemiquinone intermediate remains bound. In the second step, a second molecule of QH2 is bound and again passes its first electron to a cytochrome c acceptor. The second electron is passed to the bound ubisemiquinone, reducing it to QH2 as it gains two protons from the mitochondrial matrix. This QH2 is then released from the enzyme.
As coenzyme Q is reduced to ubiquinol on the inner side of the membrane and oxidized to ubiquinone on the other, a net transfer of protons across the membrane occurs, adding to the proton gradient. The rather complex two-step mechanism by which this occurs is important, as it increases the efficiency of proton transfer. If, instead of the Q cycle, one molecule of QH2 were used to directly reduce two molecules of cytochrome c, the efficiency would be halved, with only one proton transferred per cytochrome c reduced.
Cytochrome c oxidase (complex IV)
Cytochrome c oxidase, also known as complex IV, is the final protein complex in the electron transport chain. The mammalian enzyme has an extremely complicated structure and contains 13 subunits, two heme groups, as well as multiple metal ion cofactors – in all, three atoms of copper, one of magnesium and one of zinc.
This enzyme mediates the final reaction in the electron transport chain and transfers electrons to oxygen and hydrogen (protons), while pumping protons across the membrane. The final electron acceptor oxygen is reduced to water in this step. Both the direct pumping of protons and the consumption of matrix protons in the reduction of oxygen contribute to the proton gradient. The reaction catalyzed is the oxidation of cytochrome c and the reduction of oxygen:
Alternative reductases and oxidases
Many eukaryotic organisms have electron transport chains that differ from the much-studied mammalian enzymes described above. For example, plants have alternative NADH oxidases, which oxidize NADH in the cytosol rather than in the mitochondrial matrix, and pass these electrons to the ubiquinone pool. These enzymes do not transport protons, and, therefore, reduce ubiquinone without altering the electrochemical gradient across the inner membrane.
Another example of a divergent electron transport chain is the alternative oxidase, which is found in plants, as well as some fungi, protists, and possibly some animals. This enzyme transfers electrons directly from ubiquinol to oxygen.
The electron transport pathways produced by these alternative NADH and ubiquinone oxidases have lower ATP yields than the full pathway. The advantages produced by a shortened pathway are not entirely clear. However, the alternative oxidase is produced in response to stresses such as cold, reactive oxygen species, and infection by pathogens, as well as other factors that inhibit the full electron transport chain. Alternative pathways might, therefore, enhance an organism's resistance to injury, by reducing oxidative stress.
Organization of complexes
The original model for how the respiratory chain complexes are organized was that they diffuse freely and independently in the mitochondrial membrane. However, recent data suggest that the complexes might form higher-order structures called supercomplexes or "respirasomes". In this model, the various complexes exist as organized sets of interacting enzymes. These associations might allow channeling of substrates between the various enzyme complexes, increasing the rate and efficiency of electron transfer. Within such mammalian supercomplexes, some components would be present in higher amounts than others, with some data suggesting a ratio between complexes I/II/III/IV and the ATP synthase of approximately 1:1:3:7:4. However, the debate over this supercomplex hypothesis is not completely resolved, as some data do not appear to fit with this model.
Prokaryotic electron transport chains
In contrast to the general similarity in structure and function of the electron transport chains in eukaryotes, bacteria and archaea possess a large variety of electron-transfer enzymes. These use an equally wide set of chemicals as substrates. In common with eukaryotes, prokaryotic electron transport uses the energy released from the oxidation of a substrate to pump ions across a membrane and generate an electrochemical gradient. In the bacteria, oxidative phosphorylation in Escherichia coli is understood in most detail, while archaeal systems are at present poorly understood.
The main difference between eukaryotic and prokaryotic oxidative phosphorylation is that bacteria and archaea use many different substances to donate or accept electrons. This allows prokaryotes to grow under a wide variety of environmental conditions. In E. coli, for example, oxidative phosphorylation can be driven by a large number of pairs of reducing agents and oxidizing agents; several examples are discussed below. The midpoint potential of a chemical measures how much energy is released when it is oxidized or reduced, with reducing agents having negative potentials and oxidizing agents positive potentials.
For example, E. coli can grow with reducing agents such as formate, hydrogen, or lactate as electron donors, and nitrate, DMSO, or oxygen as acceptors. The larger the difference in midpoint potential between an oxidizing and reducing agent, the more energy is released when they react. Out of these compounds, the succinate/fumarate pair is unusual, as its midpoint potential is close to zero. Succinate can therefore be oxidized to fumarate if a strong oxidizing agent such as oxygen is available, or fumarate can be reduced to succinate using a strong reducing agent such as formate. These alternative reactions are catalyzed by succinate dehydrogenase and fumarate reductase, respectively.
Some prokaryotes use redox pairs that have only a small difference in midpoint potential. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, donating the electrons to oxygen. The small amount of energy released in this reaction is enough to pump protons and generate ATP, but not enough to produce NADH or NADPH directly for use in anabolism. This problem is solved by using a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, causing complex I to generate NADH.
Prokaryotes control their use of these electron donors and acceptors by varying which enzymes are produced, in response to environmental conditions. This flexibility is possible because different oxidases and reductases use the same ubiquinone pool. This allows many combinations of enzymes to function together, linked by the common ubiquinol intermediate. These respiratory chains therefore have a modular design, with easily interchangeable sets of enzyme systems.
In addition to this metabolic diversity, prokaryotes also possess a range of isozymes – different enzymes that catalyze the same reaction. For example, in E. coli, there are two different types of ubiquinol oxidase using oxygen as an electron acceptor. Under highly aerobic conditions, the cell uses an oxidase with a low affinity for oxygen that can transport two protons per electron. However, if levels of oxygen fall, the cell switches to an oxidase that transfers only one proton per electron, but has a high affinity for oxygen.
ATP synthase (complex V)
ATP synthase, also called complex V, is the final enzyme in the oxidative phosphorylation pathway. This enzyme is found in all forms of life and functions in the same way in both prokaryotes and eukaryotes. The enzyme uses the energy stored in a proton gradient across a membrane to drive the synthesis of ATP from ADP and phosphate (Pi). Estimates of the number of protons required to synthesize one ATP have ranged from three to four, with some suggesting cells can vary this ratio, to suit different conditions.
This phosphorylation reaction is an equilibrium, which can be shifted by altering the proton-motive force. In the absence of a proton-motive force, the ATP synthase reaction will run in reverse, hydrolyzing ATP and pumping protons out of the matrix across the membrane. However, when the proton-motive force is high, the reaction is forced to run in the forward direction, allowing protons to flow down their concentration gradient and turning ADP into ATP. Indeed, in the closely related vacuolar type H+-ATPases, the hydrolysis reaction is used to acidify cellular compartments, by pumping protons and hydrolysing ATP.
ATP synthase is a massive protein complex with a mushroom-like shape. The mammalian enzyme complex contains 16 subunits and has a mass of approximately 600 kilodaltons. The portion embedded within the membrane is called FO and contains a ring of c subunits and the proton channel. The stalk and the ball-shaped headpiece is called F1 and is the site of ATP synthesis. The ball-shaped complex at the end of the F1 portion contains six proteins of two different kinds (three α subunits and three β subunits), whereas the "stalk" consists of one protein: the γ subunit, with the tip of the stalk extending into the ball of α and β subunits. Both the α and β subunits bind nucleotides, but only the β subunits catalyze the ATP synthesis reaction. Reaching along the side of the F1 portion and back into the membrane is a long rod-like subunit that anchors the α and β subunits into the base of the enzyme.
As protons cross the membrane through the channel in the base of ATP synthase, the FO proton-driven motor rotates. Rotation might be caused by changes in the ionization of amino acids in the ring of c subunits causing electrostatic interactions that propel the ring of c subunits past the proton channel. This rotating ring in turn drives the rotation of the central axle (the γ subunit stalk) within the α and β subunits. The α and β subunits are prevented from rotating themselves by the side-arm, which acts as a stator. This movement of the tip of the γ subunit within the ball of α and β subunits provides the energy for the active sites in the β subunits to undergo a cycle of movements that produces and then releases ATP.
This ATP synthesis reaction is called the binding change mechanism and involves the active site of a β subunit cycling between three states. In the "open" state, ADP and phosphate enter the active site. The protein then closes up around the molecules and binds them loosely – the "loose" state. The enzyme then changes shape again and forces these molecules together, with the active site in the resulting "tight" state binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state, releasing ATP and binding more ADP and phosphate, ready for the next cycle.
In some bacteria and archaea, ATP synthesis is driven by the movement of sodium ions through the cell membrane, rather than the movement of protons. Archaea such as Methanococcus also contain the A1Ao synthase, a form of the enzyme that contains additional proteins with little similarity in sequence to other bacterial and eukaryotic ATP synthase subunits. It is possible that, in some species, the A1Ao form of the enzyme is a specialized sodium-driven ATP synthase, but this might not be true in all cases.
Oxidative phosphorylation - energetics
The transport of electrons from the redox pair NAD+/NADH to the final redox pair 1/2 O2/H2O can be summarized as
1/2 O2 + NADH + H+ → H2O + NAD+
The potential difference between these two redox pairs is 1.14 volts, which is equivalent to -52 kcal/mol, or about -2600 kJ per 6 mol of O2.
When one NADH is oxidized through the electron transfer chain, three ATPs are produced, which is equivalent to 7.3 kcal/mol x 3 = 21.9 kcal/mol.
The conservation of the energy can be calculated by the following formula
Efficiency = (21.9 x 100%) / 52 = 42%
So we can conclude that when NADH is oxidized, about 42% of energy is conserved in the form of three ATPs and the remaining (58%) energy is lost as heat (unless the chemical energy of ATP under physiological conditions was underestimated).
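The figures above follow from the standard relationship between a redox potential difference and Gibbs free energy, ΔG°' = −nFΔE°'. A sketch of the arithmetic (assuming n = 2 electrons transferred per NADH and F ≈ 96.5 kJ·V⁻¹·mol⁻¹):

ΔG°' = −2 × 96.5 kJ·V⁻¹·mol⁻¹ × 1.14 V ≈ −220 kJ/mol ≈ −52 kcal/mol

Efficiency ≈ (3 × 7.3 kcal/mol) / (52 kcal/mol) = 21.9 / 52 ≈ 42%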
Reactive oxygen species
Molecular oxygen is a good terminal electron acceptor because it is a strong oxidizing agent. The reduction of oxygen does involve potentially harmful intermediates. Although the transfer of four electrons and four protons reduces oxygen to water, which is harmless, transfer of one or two electrons produces superoxide or peroxide anions, which are dangerously reactive.
These reactive oxygen species and their reaction products, such as the hydroxyl radical, are very harmful to cells, as they oxidize proteins and cause mutations in DNA. This cellular damage may contribute to disease and is proposed as one cause of aging.
The cytochrome c oxidase complex is highly efficient at reducing oxygen to water, and it releases very few partly reduced intermediates; however small amounts of superoxide anion and peroxide are produced by the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, as a highly reactive ubisemiquinone free radical is formed as an intermediate in the Q cycle. This unstable species can lead to electron "leakage" when electrons transfer directly to oxygen, forming superoxide. As the production of reactive oxygen species by these proton-pumping complexes is greatest at high membrane potentials, it has been proposed that mitochondria regulate their activity to maintain the membrane potential within a narrow range that balances ATP production against oxidant generation. For instance, oxidants can activate uncoupling proteins that reduce membrane potential.
To counteract these reactive oxygen species, cells contain numerous antioxidant systems, including antioxidant vitamins such as vitamin C and vitamin E, and antioxidant enzymes such as superoxide dismutase, catalase, and peroxidases, which detoxify the reactive species, limiting damage to the cell.
In hypoxic/anoxic conditions
As oxygen is fundamental for oxidative phosphorylation, a shortage in O2 level can alter ATP production rates. The proton motive force and ATP production can be maintained by intracellular acidosis. Cytosolic protons that have accumulated with ATP hydrolysis and lactic acidosis can freely diffuse across the mitochondrial outer membrane and acidify the intermembrane space, hence directly contributing to the proton motive force and ATP production.
When exposed to hypoxia/anoxia (no oxygen), most animals will see damage done to their mitochondria. For some species, these conditions can arise from environmental variables, such as low tides or low temperatures, or from general living conditions, like living in a hypoxic underground burrow. In humans, these conditions are commonly met in medical emergencies such as strokes, ischemia, and asphyxia.
Despite this, or perhaps because of it, some species have developed defense mechanisms against anoxia/hypoxia and against the damage that occurs during reperfusion/reoxygenation. These mechanisms are diverse, differ between endotherms and ectotherms, and can differ even at the species level.
Endotherms
Hypoxia/anoxia intolerance
Most mammals and birds are intolerant to low/no oxygen conditions. For the heart, in the absence of oxygen, the first four complexes of the electron transport chain decrease in activity. This will lead to protons leaking through the inner mitochondrial membrane without complexes I, III, and IV pushing protons back through to maintain the proton gradient. There is also electron leak (an event where electrons leak out of the electron transport chain), which happens because NADH dehydrogenase within Complex I becomes damaged, which allows for the production of ROS (reactive oxygen species) during ischemia. This will lead to the reversing of Complex V, which forces protons from the matrix back into the intermembrane space, against their concentration gradient. Forcing protons against their concentration gradient requires energy, so Complex V uses up ATP as an energy source.
Reoxygenation of intolerant animals
When oxygen re-enters the system, animals are faced with a different set of problems. Since ATP was used up during the anoxic period, there is a lack of ADP within the system. This is due to ADP's natural degradation into AMP, resulting in ADP being drained from the system. With no ADP in the system, Complex V is unable to start, meaning the protons will not flow through it to enter the matrix. Due to Complex V's reversal during anoxia, the proton gradient has become hyperpolarized (the membrane potential has become abnormally large). Another factor in this problem is that succinate built up during anoxia, so when oxygen is reintroduced, succinate donates electrons to Complex II. The hyperpolarized gradient and succinate buildup lead to reverse electron transport, causing oxidative stress, which can lead to cellular damage and diseases.
Hypoxia/anoxia tolerance
The naked mole rat (Heterocephalus glaber) is a hypoxia-tolerant species that sleeps in deep burrows and in large colonies. The depth of these burrows reduces access to oxygen, and sleeping in large groups will deplete the area of oxygen quicker than usual, leading to hypoxia. The naked mole rat has the unique ability to survive low oxygen conditions for no less than several hours, and zero oxygen conditions for 18 minutes. One of the ways of combatting hypoxia in the brain is decreasing the reliance on oxygen for ATP production, achieved by decreased respiration rates and proton leak.
Reoxygenation of tolerant animals
Hypoxia/anoxia tolerant species handle ROS production during reoxygenation better than intolerant species. The cortex of the naked mole rat shows better homeostasis of ROS production than that of intolerant species and seems to lack the burst of ROS that typically comes with reoxygenation.
Ectotherms
Hypoxia/anoxia intolerance
Research on intolerant ectotherms is more limited than on tolerant ectotherms and intolerant endotherms, but anoxia/hypoxia intolerance differs between endotherms and ectotherms in how long intolerant animals can survive: while intolerant endotherms last only minutes, intolerant ectotherms, such as subtidal scallops (Argopecten irradians), can last hours. This difference could be due to several factors. One advantage is that the ectothermic inner mitochondrial membrane is less leaky, so fewer protons leak through the inner membrane, owing to differences in phospholipid bilayer composition. Another advantage ectotherms tend to have is the ability of their mitochondria to function properly over a wide range of temperatures, as in the western fence lizard (Sceloporus occidentalis). While western fence lizards are not considered hypoxia-tolerant animals, their mitochondria still showed less temperature sensitivity than mouse mitochondria.
Reoxygenation of intolerant animals
While it is unclear how reoxygenation affects intolerant ectotherms at the mitochondrial level, there is some research showing how some of them respond. In the hypoxia-sensitive shovelnose ray (Aptychotrema rostrata), ROS production is lower upon reoxygenation compared to rays only exposed to normoxia (normal oxygen levels). This differs from hypoxia-sensitive endotherms, which would see an increase in ROS production. However, the ray's levels were still higher than those of the more hypoxia-tolerant epaulette shark (Hemiscyllium ocellatum), which potentially experiences hypoxia during the bouts of low tide that occur on reef platforms. Subtidal scallops will see both a decrease in maximal respiration and a depolarization of the membrane during reoxygenation.
Hypoxia/anoxia tolerance
Hypoxia/anoxia tolerant ectotherms have shown unique strategies for surviving anoxia. Pond turtles, such as the painted turtle (Chrysemys picta bellii), experience anoxia during winter while they overwinter at the bottom of frozen ponds. In their cardiac mitochondria, the reversing of Complex V, the usage of ATP, and the build-up of succinate are all prevented during anoxia. Crucian carp (Carassius carassius) also overwinter in frozen ponds and show no loss of membrane potential in their cardiac mitochondria during anoxia, but this relies on complexes I and III remaining active.
Reoxygenation of tolerant animals
Pond turtles are able to completely avoid ROS production upon reoxygenation. However, crucian carp cannot and are unable to prevent the death of brain cells upon reoxygenation.
Inhibitors
There are several well-known drugs and toxins that inhibit oxidative phosphorylation. Although any one of these toxins inhibits only one enzyme in the electron transport chain, inhibition of any step in this process will halt the rest of the process. For example, if oligomycin inhibits ATP synthase, protons cannot pass back into the mitochondrion. As a result, the proton pumps are unable to operate, as the gradient becomes too strong for them to overcome. NADH is then no longer oxidized and the citric acid cycle ceases to operate because the concentration of NAD+ falls below the concentration that these enzymes can use.
Many site-specific inhibitors of the electron transport chain have contributed to the present knowledge of mitochondrial respiration. Synthesis of ATP is also dependent on the electron transport chain, so all site-specific inhibitors also inhibit ATP formation. The fish poison rotenone, the barbiturate drug amytal, and the antibiotic piericidin A inhibit the transfer of electrons from NADH to coenzyme Q.
Carbon monoxide, cyanide, hydrogen sulphide and azide effectively inhibit cytochrome c oxidase. Carbon monoxide reacts with the reduced form of the cytochrome while cyanide and azide react with the oxidised form. The antibiotic antimycin A and British anti-Lewisite, an antidote used against chemical weapons, are two important inhibitors of the site between cytochromes b and c1.
Not all inhibitors of oxidative phosphorylation are toxins. In brown adipose tissue, regulated proton channels called uncoupling proteins can uncouple respiration from ATP synthesis. This rapid respiration produces heat, and is particularly important as a way of maintaining body temperature for hibernating animals, although these proteins may also have a more general function in cells' responses to stress.
History
The field of oxidative phosphorylation began with the report in 1906 by Arthur Harden of a vital role for phosphate in cellular fermentation, but initially only sugar phosphates were known to be involved. However, in the early 1940s, the link between the oxidation of sugars and the generation of ATP was firmly established by Herman Kalckar, confirming the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that the coenzyme NADH linked metabolic pathways such as the citric acid cycle and the synthesis of ATP. The term oxidative phosphorylation was coined in 1939.
For another twenty years, the mechanism by which ATP is generated remained mysterious, with scientists searching for an elusive "high-energy intermediate" that would link oxidation and phosphorylation reactions. This puzzle was solved by Peter D. Mitchell with the publication of the chemiosmotic theory in 1961. At first, this proposal was highly controversial, but it was slowly accepted and Mitchell was awarded a Nobel prize in 1978. Subsequent research concentrated on purifying and characterizing the enzymes involved, with major contributions being made by David E. Green on the complexes of the electron-transport chain, as well as Efraim Racker on the ATP synthase. A critical step towards solving the mechanism of the ATP synthase was provided by Paul D. Boyer, by his development in 1973 of the "binding change" mechanism, followed by his radical proposal of rotational catalysis in 1982. More recent work has included structural studies on the enzymes involved in oxidative phosphorylation by John E. Walker, with Walker and Boyer being awarded a Nobel Prize in 1997.
See also
Respirometry
TIM/TOM Complex
Notes
References
Further reading
Introductory
Advanced
General resources
Animated diagrams illustrating oxidative phosphorylation Wiley and Co Concepts in Biochemistry
On-line biophysics lectures Antony Crofts, University of Illinois at Urbana–Champaign
ATP Synthase Graham Johnson
Structural resources
PDB molecule of the month:
ATP synthase
Cytochrome c
Cytochrome c oxidase
Interactive molecular models at Universidade Fernando Pessoa:
NADH dehydrogenase
succinate dehydrogenase
Coenzyme Q - cytochrome c reductase
cytochrome c oxidase
Cellular respiration
Integral membrane proteins
Metabolism
Redox | Oxidative phosphorylation | [
"Chemistry",
"Biology"
] | 8,870 | [
"Cellular respiration",
"Redox",
"Exercise biochemistry",
"Electrochemistry",
"Cellular processes",
"nan",
"Biochemistry",
"Metabolism"
] |
22,804 | https://en.wikipedia.org/wiki/Operational%20amplifier | An operational amplifier (often op amp or opamp) is a DC-coupled electronic voltage amplifier with a differential input, a (usually) single-ended output, and an extremely high gain. Its name comes from its original use of performing mathematical operations in analog computers.
By using negative feedback, an op amp circuit's characteristics (e.g. its gain, input and output impedance, bandwidth, and functionality) can be determined by external components and have little dependence on temperature coefficients or engineering tolerance in the op amp itself. This flexibility has made the op amp a popular building block in analog circuits.
Today, op amps are used widely in consumer, industrial, and scientific electronics. Many standard integrated circuit op amps cost only a few cents; however, some integrated or hybrid operational amplifiers with special performance specifications may cost considerably more. Op amps may be packaged as components or used as elements of more complex integrated circuits.
The op amp is one type of differential amplifier. Other differential amplifier types include the fully differential amplifier (an op amp with a differential rather than single-ended output), the instrumentation amplifier (usually built from three op amps), the isolation amplifier (with galvanic isolation between input and output), and negative-feedback amplifier (usually built from one or more op amps and a resistive feedback network).
Operation
The amplifier's differential inputs consist of a non-inverting input (+) with voltage V+ and an inverting input (−) with voltage V−; ideally the op amp amplifies only the difference in voltage between the two, which is called the differential input voltage. The output voltage of the op amp Vout is given by the equation Vout = AOL (V+ − V−),
where AOL is the open-loop gain of the amplifier (the term "open-loop" refers to the absence of an external feedback loop from the output to the input).
Open-loop amplifier
The magnitude of AOL is typically very large (100,000 or more for integrated circuit op amps, corresponding to +100 dB). Thus, even a difference of a few microvolts between V+ and V− may drive the amplifier into clipping or saturation. The magnitude of AOL is not well controlled by the manufacturing process, and so it is impractical to use an open-loop amplifier as a stand-alone differential amplifier.
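To make the scale concrete, a minimal numeric sketch (the ±15 V supply rails and the gain of 100,000 are illustrative assumptions, not values for a specific device):

# Open-loop behaviour: the output is the amplified input difference, clipped at the supply rails.
A_OL = 100_000   # assumed open-loop gain
V_RAIL = 15.0    # assumed +/-15 V supply rails

def open_loop_output(v_plus, v_minus):
    v_out = A_OL * (v_plus - v_minus)
    return max(-V_RAIL, min(V_RAIL, v_out))  # clip to the rails

print(open_loop_output(150e-6, 0.0))  # 15.0 -- a 150 microvolt difference already saturates the output
print(open_loop_output(50e-6, 0.0))   # 5.0  -- still in the linear region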
Without negative feedback, and optionally positive feedback for regeneration, an open-loop op amp acts as a comparator, although comparator ICs are better suited. If the inverting input is held at ground (0 V), and the input voltage Vin applied to the non-inverting input is positive, the output will be maximum positive; if Vin is negative, the output will be maximum negative.
Closed-loop amplifier
If predictable operation is desired, negative feedback is used, by applying a portion of the output voltage to the inverting input. The closed-loop feedback greatly reduces the gain of the circuit. When negative feedback is used, the circuit's overall gain and response is determined primarily by the feedback network, rather than by the op-amp characteristics. If the feedback network is made of components with values small relative to the op amp's input impedance, the value of the op amp's open-loop response AOL does not seriously affect the circuit's performance. In this context, high input impedance at the input terminals and low output impedance at the output terminal(s) are particularly useful features of an op amp.
The response of the op-amp circuit with its input, output, and feedback circuits to an input is characterized mathematically by a transfer function; designing an op-amp circuit to have a desired transfer function is in the realm of electrical engineering. The transfer functions are important in most applications of op amps, such as in analog computers.
In the non-inverting amplifier configuration, the presence of negative feedback via the voltage divider Rf, Rg determines the closed-loop gain ACL = Vout / Vin = 1 + Rf/Rg. Equilibrium will be established when Vout is just sufficient to pull the inverting input to the same voltage as Vin. The voltage gain of the entire circuit is thus 1 + Rf/Rg. As a simple example, if Vin = 1 V and Rf = Rg, Vout will be 2 V, exactly the amount required to keep V− at 1 V. Because of the feedback provided by the Rf, Rg network, this is a closed-loop circuit.
Another way to analyze this circuit proceeds by making the following (usually valid) assumptions:
When an op amp operates in linear (i.e., not saturated) mode, the difference in voltage between the non-inverting (+) and inverting (−) pins is negligibly small.
The input impedance of the (+) and (−) pins is much larger than other resistances in the circuit.
The input signal Vin appears at both (+) and (−) pins per assumption 1, resulting in a current through Rg of i = Vin / Rg.
Since Kirchhoff's current law states that the same current must leave a node as enter it, and since the impedance into the (−) pin is near infinity per assumption 2, we can assume practically all of the same current i flows through Rf, creating an output voltage Vout = Vin + i·Rf = Vin + (Vin / Rg)·Rf.
By combining terms, we determine the closed-loop gain ACL = Vout / Vin = 1 + Rf / Rg.
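A short numeric sketch of this result (the component and input values below are chosen arbitrarily for illustration):

# Ideal non-inverting amplifier: closed-loop gain = 1 + Rf/Rg, as derived above.
def noninverting_gain(r_f, r_g):
    return 1.0 + r_f / r_g

def noninverting_output(v_in, r_f, r_g):
    return v_in * noninverting_gain(r_f, r_g)

print(noninverting_output(1.0, 10e3, 10e3))  # 2.0 -- reproduces the 1 V -> 2 V example with Rf = Rg
print(noninverting_output(0.5, 9e3, 1e3))    # 5.0 -- gain of 10 with Rf = 9 kOhm, Rg = 1 kOhm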
Op-amp characteristics
Ideal op amps
An ideal op amp is usually considered to have the following characteristics:
Infinite open-loop gain G = vout / vin
Infinite input impedance Rin, and so zero input current
Zero input offset voltage
Infinite output voltage range
Infinite bandwidth with zero phase shift and infinite slew rate
Zero output impedance Rout, and so infinite output current range
Zero noise
Infinite common-mode rejection ratio (CMRR)
Infinite power supply rejection ratio.
These ideals can be summarized by the two "golden rules":
In a closed loop the output does whatever is necessary to make the voltage difference between the inputs zero.
The inputs draw zero current.
The first rule only applies in the usual case where the op amp is used in a closed-loop design (negative feedback, where there is a signal path of some sort feeding back from the output to the inverting input). These rules are commonly used as a good first approximation for analyzing or designing op-amp circuits.
None of these ideals can be perfectly realized. A real op amp may be modeled with non-infinite or non-zero parameters using equivalent resistors and capacitors in the op-amp model. The designer can then include these effects into the overall performance of the final circuit. Some parameters may turn out to have negligible effect on the final design while others represent actual limitations of the final performance.
Real op amps
Real op amps differ from the ideal model in various aspects.
Finite gain
Open-loop gain is finite in real operational amplifiers. Typical devices exhibit open-loop DC gain exceeding 100,000. So long as the loop gain (i.e., the product of open-loop and feedback gains) is very large, the closed-loop gain will be determined entirely by the amount of negative feedback (i.e., it will be independent of open-loop gain). In applications where the closed-loop gain must be very high (approaching the open-loop gain), the feedback gain will be very low and the lower loop gain in these cases causes non-ideal behavior from the circuit.
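For a concrete sense of this effect, the standard negative-feedback relation can be applied (a sketch; β denotes the fraction of the output fed back to the inverting input, and the numbers are illustrative assumptions rather than values for a specific device):

ACL = AOL / (1 + AOL·β)

For a target gain of 10 (β = 0.1), an open-loop gain of 100,000 gives ACL = 100,000 / (1 + 10,000) ≈ 9.999, within about 0.01% of the ideal value, whereas an open-loop gain of only 1,000 gives ACL = 1,000 / 101 ≈ 9.9, an error of about 1%.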
Non-zero output impedance
Low output impedance is important for low-impedance loads; for these loads, the voltage drop across the output impedance effectively reduces the open-loop gain. In configurations with a voltage-sensing negative feedback, the output impedance of the amplifier is effectively lowered; thus, in linear applications, op-amp circuits usually exhibit a very low output impedance.
Low-impedance outputs typically require high quiescent (i.e., idle) current in the output stage and will dissipate more power, so low-power designs may purposely sacrifice low output impedance.
Finite input impedances
The differential input impedance of the operational amplifier is defined as the impedance between its two inputs; the common-mode input impedance is the impedance from each input to ground. MOSFET-input operational amplifiers often have protection circuits that effectively short circuit any input differences greater than a small threshold, so the input impedance can appear to be very low in some tests. However, as long as these operational amplifiers are used in a typical high-gain negative feedback application, these protection circuits will be inactive. The input bias and leakage currents described below are a more important design parameter for typical operational amplifier applications.
Input capacitance
Additional input impedance due to parasitic capacitance can be a critical issue for high-frequency operation where it reduces input impedance and may cause phase shifts.
Input current
Due to biasing requirements or leakage, a small amount of current flows into the inputs. When high resistances or sources with high output impedances are used in the circuit, these small currents can produce significant voltage drops. If the input currents are matched, and the impedances looking out of both inputs are matched, then the voltages produced at each input will be equal. Because the operational amplifier operates on the difference between its inputs, these matched voltages will have no effect. It is more common for the input currents to be slightly mismatched. The difference is called input offset current, and even with matched resistances a small offset voltage (distinct from the input offset voltage below) can be produced. This offset voltage can create offsets or drifting in the operational amplifier.
Input offset voltage
Input offset voltage is a voltage required across the op amp's input terminals to drive the output voltage to zero. In the perfect amplifier, there would be no input offset voltage. However, it exists because of imperfections in the differential amplifier input stage of op amps. Input offset voltage creates two problems: First, due to the amplifier's high voltage gain, it virtually assures that the amplifier output will go into saturation if it is operated without negative feedback, even when the input terminals are wired together. Second, in a closed loop, negative feedback configuration, the input offset voltage is amplified along with the signal and this may pose a problem if high precision DC amplification is required or if the input signal is very small.
Common-mode gain
A perfect operational amplifier amplifies only the voltage difference between its two inputs, completely rejecting all voltages that are common to both. However, the differential input stage of an operational amplifier is never perfect, leading to the amplification of these common voltages to some degree. The standard measure of this defect is called the common-mode rejection ratio (CMRR). Minimization of common-mode gain is important in non-inverting amplifiers that operate at high gain.
Power-supply rejection
The output of a perfect operational amplifier will be independent of power supply voltage fluctuations. Every real operational amplifier has a finite power supply rejection ratio (PSRR) that reflects how well the op amp can reject noise in its power supply from propagating to the output. With increasing frequency the power-supply rejection usually gets worse.
Temperature effects
Performance and properties of the amplifier typically change, to some extent, with changes in temperature. Temperature drift of the input offset voltage is especially important.
Drift
Real op-amp parameters are subject to slow change over time and with changes in temperature, input conditions, etc.
Finite bandwidth
All amplifiers have finite bandwidth. To a first approximation, the op amp has the frequency response of an integrator with gain. That is, the gain of a typical op amp is inversely proportional to frequency and is characterized by its gain–bandwidth product (GBWP). For example, an op amp with a GBWP of 1 MHz would have a gain of 5 at 200 kHz, and a gain of 1 at 1 MHz. This dynamic response, coupled with the very high DC gain of the op amp, gives it the characteristics of a first-order low-pass filter with very high DC gain and a low cutoff frequency given by the GBWP divided by the DC gain. The finite bandwidth of an op amp can be the source of several problems in op-amp circuits. Typical low-cost, general-purpose op amps exhibit a GBWP of a few megahertz. Specialty and high-speed op amps exist that can achieve a GBWP of hundreds of megahertz. For very high-frequency circuits, a current-feedback operational amplifier is often used.
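The trade-off implied by the gain–bandwidth product can be sketched with a single-pole model (an approximation consistent with the integrator description above; the 1 MHz GBWP is the same illustrative value used in the text):

# Single-pole op-amp model: closed-loop bandwidth ~ GBWP / closed-loop gain.
def closed_loop_bandwidth(gbwp_hz, closed_loop_gain):
    return gbwp_hz / closed_loop_gain

GBWP = 1e6  # 1 MHz
for gain in (1, 5, 100):
    print(gain, closed_loop_bandwidth(GBWP, gain))
# gain 1   -> 1,000,000 Hz
# gain 5   ->   200,000 Hz  (matches the example above)
# gain 100 ->    10,000 Hz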
Noise
Amplifiers intrinsically output noise, even when there is no signal applied. This can be due to internal thermal noise and flicker noise of the device. For applications with high gain or high bandwidth, noise becomes an important consideration and a low-noise amplifier, which is specifically designed for minimum intrinsic noise, may be required to meet performance requirements.
Non-linear imperfections
Saturation
Output voltage is limited to a minimum and maximum value close to the power supply voltages. The output of older op amps can reach to within one or two volts of the supply rails. The output of so-called rail-to-rail op amps can reach to within millivolts of the supply rails when providing low output currents.
Slew rate limiting
The amplifier's output voltage reaches its maximum rate of change, the slew rate, usually specified in volts per microsecond (V/μs). When slew rate limiting occurs, further increases in the input signal have no effect on the rate of change of the output. Slew rate limiting is usually caused by the input stage saturating; the result is a constant current i driving a capacitance C in the amplifier (especially those capacitances used to implement its frequency compensation), and the slew rate is then limited by dv/dt = i/C. Slewing is associated with the large-signal performance of an op amp. Consider, for example, an op amp configured for a gain of 10. Let the input be a 1 V, 100 kHz sawtooth wave. That is, the amplitude is 1 V and the period is 10 microseconds. Accordingly, the rate of change (i.e., the slope) of the input is 0.1 V per microsecond. After 10× amplification, the output should be a 10 V, 100 kHz sawtooth, with a corresponding slew rate of 1 V per microsecond. However, the classic 741 op amp has a 0.5 V per microsecond slew rate specification, so that its output can rise to no more than 5 V in the sawtooth's 10-microsecond period. Thus, if one were to measure the output, it would be a 5 V, 100 kHz sawtooth, rather than a 10 V, 100 kHz sawtooth. Next consider the same amplifier and 100 kHz sawtooth, but now the input amplitude is 100 mV rather than 1 V. After 10× amplification the output is a 1 V, 100 kHz sawtooth with a corresponding slew rate of 0.1 V per microsecond. In this instance, the 741 with its 0.5 V per microsecond slew rate will amplify the input properly. Modern high-speed op amps can have slew rates in excess of 5,000 V per microsecond. However, it is more common for op amps to have slew rates in the range 5–100 V per microsecond. For example, the general purpose TL081 op amp has a slew rate of 13 V per microsecond. As a general rule, low power and small bandwidth op amps have low slew rates. As an example, the LT1494 micropower op amp consumes 1.5 microamps but has a 2.7 kHz gain-bandwidth product and a 0.001 V per microsecond slew rate.
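The sawtooth example can be checked numerically (the 0.5 V/µs slew rate and the 100 kHz sawtooth are the figures used in the text; the sketch assumes an ideal sawtooth that ramps through its full amplitude once per period):

# Compare the slew rate demanded at the output with the amplifier's limit.
SLEW_LIMIT = 0.5e6  # 741 slew rate: 0.5 V/us expressed in V/s

def sawtooth_slope(amplitude_v, frequency_hz):
    return amplitude_v * frequency_hz  # V/s for an ideal sawtooth

for v_in, gain in ((1.0, 10), (0.1, 10)):
    required = sawtooth_slope(v_in * gain, 100e3)
    print(v_in, required, "slew-limited" if required > SLEW_LIMIT else "ok")
# 1.0 V input -> 1e6 V/s required -> slew-limited (the output can ramp only 5 V per period)
# 0.1 V input -> 1e5 V/s required -> ok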
Non-linear input-output relationship
The output voltage may not be accurately proportional to the difference between the input voltages, producing distortion. This effect will be very small in a practical circuit where substantial negative feedback is used.
Phase reversal
In some integrated op amps, when the published common mode voltage is violated (e.g., by one of the inputs being driven to one of the supply voltages), the output may slew to the opposite polarity from what is expected in normal operation. Under such conditions, negative feedback becomes positive, likely causing the circuit to lock up in that state.
Power considerations
Limited output current
The output current must be finite. In practice, most op amps are designed to limit the output current to prevent damage to the device, typically around 25 mA for a type 741 IC op amp. Modern designs are electronically more robust than earlier implementations and some can sustain direct short circuits on their outputs without damage.
Limited output voltage
Output voltage cannot exceed the power supply voltage supplied to the op amp. The maximum output of most op amps is further reduced by some amount due to limitations in the output circuitry. Rail-to-rail op amps are designed for maximum output levels.
Output sink current
The output sink current is the maximum current allowed to sink into the output stage. Some manufacturers provide an output voltage vs. the output sink current plot which gives an idea of the output voltage when it is sinking current from another source into the output pin.
Limited dissipated power
The output current flows through the op amp's internal output impedance, generating heat that must be dissipated. If the op amp dissipates too much power, then its temperature will increase above some safe limit. The op amp must shut down or risk being damaged.
Modern integrated FET or MOSFET op amps approximate more closely the ideal op amp than bipolar ICs when it comes to input impedance and input bias currents. Bipolars are generally better when it comes to input voltage offset, and often have lower noise. Generally, at room temperature, with a fairly large signal, and limited bandwidth, FET and MOSFET op amps now offer better performance.
Internal circuitry of 741-type op amp
Sourced by many manufacturers, and in multiple similar products, an example of a bipolar transistor operational amplifier is the 741 integrated circuit designed in 1968 by David Fullagar at Fairchild Semiconductor after Bob Widlar's LM301 integrated circuit design.
In this discussion, we use the parameters of the hybrid-pi model to characterize the small-signal, grounded emitter characteristics of a transistor. In this model, the current gain of a transistor is denoted hfe, more commonly called the β.
Architecture
A small-scale integrated circuit, the 741 op amp shares with most op amps an internal structure consisting of three gain stages:
Differential amplifier (outlined dark blue) — provides high differential amplification (gain), with rejection of common-mode signal, low noise, high input impedance, and drives a
Voltage amplifier (outlined magenta) — provides high voltage gain, a single-pole frequency roll-off, and in turn drives the
Output amplifier (outlined cyan and green) — provides high current gain (low output impedance), along with output current limiting, and output short-circuit protection.
Additionally, it contains current mirror (outlined red) bias circuitry and compensation capacitor (30 pF).
Differential amplifier
The input stage consists of a cascaded differential amplifier (outlined in dark blue) followed by a current-mirror active load. This constitutes a transconductance amplifier, turning a differential voltage signal at the bases of Q1, Q2 into a current signal into the base of Q15.
It entails two cascaded transistor pairs, satisfying conflicting requirements. The first stage consists of the matched NPN emitter follower pair Q1, Q2 that provide high input impedance. The second is the matched PNP common-base pair Q3, Q4 that eliminates the undesirable Miller effect; it drives an active load Q7 plus matched pair Q5, Q6.
That active load is implemented as a modified Wilson current mirror; its role is to convert the (differential) input current signal to a single-ended signal without the attendant 50% losses (increasing the op amp's open-loop gain by 3 dB). Thus, a small-signal differential current in Q3 versus Q4 appears summed (doubled) at the base of Q15, the input of the voltage gain stage.
Voltage amplifier
The (class-A) voltage gain stage (outlined in magenta) consists of the two NPN transistors Q15 and Q19 connected in a Darlington configuration and uses the output side of current mirror formed by Q12 and Q13 as its collector (dynamic) load to achieve its high voltage gain. The output sink transistor Q20 receives its base drive from the common collectors of Q15 and Q19; the level-shifter Q16 provides base drive for the output source transistor Q14. The transistor Q22 prevents this stage from delivering excessive current to Q20 and thus limits the output sink current.
Output amplifier
The output stage (Q14, Q20, outlined in cyan) is a Class AB amplifier. It provides an output drive with impedance of ~50Ω, in essence, current gain. Transistor Q16 (outlined in green) provides the quiescent current for the output transistors and Q17 limits output source current.
Biasing circuits
Biasing circuits provide appropriate quiescent current for each stage of the op amp.
The resistor (39 kΩ) connecting the (diode-connected) Q11 and Q12, and the given supply voltage (VS+ − VS−), determine the current in the current mirrors, (matched pairs) Q10/Q11 and Q12/Q13. The collector current of Q11, i11, satisfies i11 × 39 kΩ = VS+ − VS− − 2 VBE. For the typical VS = ±20 V, the standing current in Q11 and Q12 (as well as in Q13) would be ~1 mA. A supply current for a typical 741 of about 2 mA agrees with the notion that these two bias currents dominate the quiescent supply current.
Transistors Q11 and Q10 form a Widlar current mirror, with quiescent current in Q10 i10 such that ln(i11 / i10) = i10 × 5 kΩ / 28 mV, where 5 kΩ represents the emitter resistor of Q10, and 28 mV is VT, the thermal voltage at room temperature. In this case i10 ≈ 20 μA.
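Because the Widlar relation above is transcendental in i10, it is normally solved numerically. The sketch below uses the figures quoted in this description (39 kΩ bias resistor, ±20 V supplies, 5 kΩ emitter resistor, VT = 28 mV) plus an assumed VBE of about 0.65 V, and lands close to the ~20 μA quoted.

import math

VT = 0.028                               # thermal voltage as used in the text, volts
R_E = 5_000                              # Q10 emitter resistor, ohms
i11 = (40.0 - 2 * 0.65) / 39_000         # ~1 mA set by the 39 kohm resistor at +/-20 V

def widlar_error(i10):
    # ln(i11/i10) - i10*R_E/VT is zero at the operating point
    return math.log(i11 / i10) - i10 * R_E / VT

lo, hi = 1e-6, 100e-6                    # bracket the root between 1 uA and 100 uA
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if widlar_error(lo) * widlar_error(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"i11 ~ {i11 * 1e3:.2f} mA, i10 ~ {0.5 * (lo + hi) * 1e6:.0f} uA")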
Differential amplifier
The biasing circuit of this stage is set by a feedback loop that forces the collector currents of Q10 and Q9 to (nearly) match. Any small difference in these currents provides drive for the common base of Q3 and Q4. The sum of the quiescent currents through Q1/Q3 and Q2/Q4 is mirrored from Q8 into Q9, where it is summed with the collector current in Q10, the result being applied to the bases of Q3 and Q4.
The quiescent currents through Q1 and Q3 (also Q2 and Q4) i1 will thus be half of i10, of order ~10 μA. Input bias current for the base of Q1 (also Q2) will amount to i1 / β; typically ~50 nA, implying a current gain hfe ≈ 200 for Q1 (also Q2).
This feedback circuit tends to draw the common base node of Q3/Q4 to a voltage Vcom − 2 VBE, where Vcom is the input common-mode voltage. At the same time, the magnitude of the quiescent current is relatively insensitive to the characteristics of the components Q1–Q4, such as hfe, that would otherwise cause temperature dependence or part-to-part variations.
Transistor Q7 drives Q5 and Q6 into conduction until their (equal) collector currents match that of Q1/Q3 and Q2/Q4. The quiescent current in Q7 is VBE / 50 kΩ, about 35 μA, as is the quiescent current in Q15, with its matching operating point. Thus, the quiescent currents are pairwise matched in Q1/Q2, Q3/Q4, Q5/Q6, and Q7/Q15.
Voltage amplifier
Quiescent currents in Q16 and Q19 are set by the current mirror Q12/Q13, which is running at ~1 mA. The collector current in Q19 tracks that standing current.
Output amplifier
In the circuit involving Q16 (variously named rubber diode or VBE multiplier), the 4.5 kΩ resistor must be conducting about 100 μA, with Q16 VBE roughly 700 mV. Then VCB must be about 0.45 V and VCE at about 1.0 V. Because the Q16 collector is driven by a current source and the Q16 emitter drives into the Q19 collector current sink, the Q16 transistor establishes a voltage difference between the Q14 base and the Q20 base of ~1 V, regardless of the common-mode voltage of Q14/Q20 bases. The standing current in Q14/Q20 will be a factor exp(100 mV / VT) ≈ 36 smaller than the 1 mA quiescent current in the class A portion of the op amp. This (small) standing current in the output transistors establishes the output stage in class AB operation and reduces the crossover distortion of this stage.
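The factor of roughly 36 quoted above comes straight from the exponential diode law; the sketch below evaluates it with VT taken as 28 mV, as elsewhere in this description, and is illustrative only.

import math

VT = 0.028              # thermal voltage, volts
I_CLASS_A = 1e-3        # ~1 mA quiescent current in the class-A portion
SHORTFALL = 0.100       # reading of the text: the ~1 V bias is ~100 mV short of two VBE drops at 1 mA

factor = math.exp(SHORTFALL / VT)
print(f"reduction factor ~ {factor:.0f}, standing current ~ {I_CLASS_A / factor * 1e6:.0f} uA")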
Small-signal differential mode
A small differential input voltage signal gives rise, through multiple stages of current amplification, to a much larger voltage signal on output.
Input impedance
The input stage with Q1 and Q3 is similar to an emitter-coupled pair (long-tailed pair), with Q2 and Q4 adding some degenerating impedance. The input impedance is relatively high because of the small current through Q1-Q4. A typical 741 op amp has a differential input impedance of about 2 MΩ. The common mode input impedance is even higher, as the input stage works at an essentially constant current.
Differential amplifier
A differential voltage Vin at the op amp inputs (pins 3 and 2, respectively) gives rise to a small differential current in the bases of Q1 and Q2, iin ≈ Vin / (2 hie hfe). This differential base current causes a change in the differential collector current in each leg by iin hfe. Introducing the transconductance of Q1, gm = hfe / hie, the (small-signal) current at the base of Q15 (the input of the voltage gain stage) is Vin gm / 2.
This portion of the op amp cleverly changes a differential signal at the op amp inputs to a single-ended signal at the base of Q15, and in a way that avoids wastefully discarding the signal in either leg. To see how, notice that a small negative change in voltage at the inverting input (Q2 base) drives it out of conduction, and this incremental decrease in current passes directly from Q4 collector to its emitter, resulting in a decrease in base drive for Q15. On the other hand, a small positive change in voltage at the non-inverting input (Q1 base) drives this transistor into conduction, reflected in an increase in current at the collector of Q3. This current drives Q7 further into conduction, which turns on current mirror Q5/Q6. Thus, the increase in Q3 emitter current is mirrored in an increase in Q6 collector current; the increased collector current shunts more current from the collector node and results in a decrease in base drive current for Q15. Besides avoiding wasting 3 dB of gain here, this technique decreases common-mode gain and feedthrough of power supply noise.
Voltage amplifier
A current signal i at Q15's base gives rise to a current in Q19 of order iβ² (the product of the hfe of each of Q15 and Q19, which are connected in a Darlington pair). This current signal develops a voltage at the bases of output transistors Q14 and Q20 proportional to the hie of the respective transistor.
Output amplifier
Output transistors Q14 and Q20 are each configured as an emitter follower, so no voltage gain occurs there; instead, this stage provides current gain, equal to the hfe of Q14 and Q20.
The current gain lowers the output impedance and although the output impedance is not zero, as it would be in an ideal op amp, with negative feedback it approaches zero at low frequencies.
Other linear characteristics
Overall open-loop gain
The net open-loop small-signal voltage gain of the op amp is determined by the product of the current gain hfe of some 4 transistors. In practice, the voltage gain for a typical 741-style op amp is of order 200,000, and the current gain, the ratio of input impedance (~2–6 MΩ) to output impedance (~50 Ω), provides yet more (power) gain.
Small-signal common mode gain
The ideal op amp has infinite common-mode rejection ratio, or zero common-mode gain.
In the present circuit, if the input voltages change in the same direction, the negative feedback makes the Q3/Q4 base voltage follow (at 2 VBE below) the input voltage variations. The output part (Q10) of the Q10-Q11 current mirror keeps the common current through Q9/Q8 constant in spite of the varying voltage. The Q3/Q4 collector currents, and accordingly the output current at the base of Q15, remain unchanged.
In the typical 741 op amp, the common-mode rejection ratio is 90 dB, implying an open-loop common-mode voltage gain of about 6.
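The stated CMRR and the implied common-mode gain are related by a simple decibel conversion; the short sketch below uses the open-loop differential gain of about 200,000 quoted earlier, so the figures are illustrative.

a_diff = 200_000                          # typical 741 open-loop differential gain
cmrr_db = 90.0                            # common-mode rejection ratio, decibels
a_cm = a_diff / 10 ** (cmrr_db / 20)      # CMRR (linear) = a_diff / a_cm
print(f"common-mode voltage gain ~ {a_cm:.1f}")   # about 6, as stated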
Frequency compensation
The innovation of the Fairchild μA741 was the introduction of frequency compensation via an on-chip (monolithic) capacitor, simplifying application of the op amp by eliminating the need for external components for this function. The 30 pF capacitor stabilizes the amplifier via Miller compensation and functions in a manner similar to an op-amp integrator circuit. This technique is also known as dominant-pole compensation because it introduces a pole that masks (dominates) the effects of other poles in the open-loop frequency response; in a 741 op amp this pole can be as low as 10 Hz (where it causes a −3 dB loss of open-loop voltage gain).
This internal compensation is provided to achieve unconditional stability of the amplifier in negative-feedback configurations where the feedback network is non-reactive and the loop gain is unity or higher. In contrast, amplifiers without internal compensation, such as the μA748, may require external compensation or closed-loop gains significantly higher than unity.
Input offset voltage
The offset null pins may be used to place external resistors (typically in the form of the two ends of a potentiometer, with the slider connected to VS–) in parallel with the emitter resistors of Q5 and Q6, to adjust the balance of the Q5/Q6 current mirror. The potentiometer is adjusted such that the output is null (midrange) when the inputs are shorted together.
Non-linear characteristics
Input breakdown voltage
The transistors Q3 and Q4 help to increase the reverse VBE rating: the base-emitter junctions of the NPN transistors Q1 and Q2 break down at around 7 V, but the PNP transistors Q3 and Q4 have VBE breakdown voltages around 50 V.
Output-stage voltage swing and current limiting
Variations in the quiescent current with temperature, or due to manufacturing variations, are common, so crossover distortion may be subject to significant variation.
The output range of the amplifier is about one volt less than the supply voltage, owing in part to VBE of the output transistors Q14 and Q20.
The resistor at the Q14 emitter, along with Q17, limits Q14 current to about 25 mA; otherwise, Q17 conducts no current. Current limiting for Q20 is performed in the voltage gain stage: Q22 senses the voltage across Q19's emitter resistor; as it turns on, it diminishes the drive current to Q15 base. Later versions of this amplifier schematic may show a somewhat different method of output current limiting.
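The current limit described here is essentially the sense-resistor drop needed to turn on the limiting transistor. The sketch below illustrates the idea with a hypothetical 25 Ω sense resistor and a 0.6 V turn-on voltage; the actual resistor value is not given in this description, so the numbers are only for illustration.

V_BE_ON = 0.6        # approximate base-emitter voltage needed to turn on the limiter
R_SENSE = 25.0       # ohms -- hypothetical value, not taken from the schematic text
i_limit = V_BE_ON / R_SENSE
print(f"output current limit ~ {i_limit * 1e3:.0f} mA")   # of the order of the 25 mA quoted earlier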
Applicability considerations
While the 741 was historically used in audio and other sensitive equipment, such use is now rare because of the improved noise performance of more modern op amps. Apart from generating noticeable hiss, 741s and other older op amps may have poor common-mode rejection ratios and so will often introduce cable-borne mains hum and other common-mode interference, such as switch clicks, into sensitive equipment.
The name 741 has often come to mean a generic op-amp IC (such as μA741, LM301, 558, LM324, TBA221 — or a more modern replacement such as the TL071). The description of the 741 output stage is qualitatively similar for many other designs (that may have quite different input stages), except:
Some devices (μA748, LM301, LM308) are not internally compensated (require an external capacitor from output to some point within the operational amplifier, if used in low closed-loop gain applications).
Some modern devices have rail-to-rail output capability, meaning that the output can range from within a few millivolts of the positive supply voltage to within a few millivolts of the negative supply voltage.
Classification
Op amps may be classified by their construction:
discrete, built from individual transistors or tubes/valves,
hybrid, consisting of discrete and integrated components,
fully integrated circuits — most common, having displaced the former two due to low cost.
IC op amps may be classified in many ways, including:
Device grade, including acceptable operating temperature ranges and other environmental or quality factors. For example: LM101, LM201, and LM301 refer to the military, industrial, and commercial versions of the same component. Military and industrial-grade components offer better performance in harsh conditions than their commercial counterparts but are sold at higher prices.
Classification by package type may also affect environmental hardiness, as well as manufacturing options; DIP, and other through-hole packages are tending to be replaced by surface-mount devices.
Classification by internal compensation: op amps may suffer from high frequency instability in some negative feedback circuits unless a small compensation capacitor modifies the phase and frequency responses. Op amps with a built-in capacitor are termed compensated, and allow circuits above some specified closed-loop gain to be stable with no external capacitor. In particular, op amps that are stable even with a closed loop gain of 1 are called unity gain compensated.
Single, dual and quad versions of many commercial op-amp ICs are available, meaning 1, 2 or 4 operational amplifiers are included in the same package.
Rail-to-rail input (and/or output) op amps can work with input (and/or output) signals very close to the power supply rails.
CMOS op amps (such as the CA3140E) provide extremely high input resistances, higher than JFET-input op amps, which are normally higher than bipolar-input op amps.
Programmable op amps allow the quiescent current, bandwidth and so on to be adjusted by an external resistor.
Manufacturers often market their op amps according to purpose, such as low-noise pre-amplifiers, wide bandwidth amplifiers, and so on.
Applications
Use in electronics system design
The use of op amps as circuit blocks is much easier and clearer than specifying all their individual circuit elements (transistors, resistors, etc.), whether the amplifiers used are integrated or discrete circuits. In the first approximation op amps can be used as if they were ideal differential gain blocks; at a later stage, limits can be placed on the acceptable range of parameters for each op amp.
Circuit design follows the same lines for all electronic circuits. A specification is drawn up governing what the circuit is required to do, with allowable limits. For example, the gain may be required to be 100 times, with a tolerance of 5% but drift of less than 1% in a specified temperature range; the input impedance not less than one megohm; etc.
A basic circuit is designed, often with the help of electronic circuit simulation. Specific commercially available op amps and other components are then chosen that meet the design criteria within the specified tolerances at acceptable cost. If not all criteria can be met, the specification may need to be modified.
A prototype is then built and tested; additional changes to meet or improve the specification, alter functionality, or reduce the cost, may be made.
Applications without feedback
Without feedback, the op amp may be used as a voltage comparator. Note that a device designed primarily as a comparator may be better if, for instance, speed is important or a wide range of input voltages may be found, since such devices can quickly recover from full-on or full-off saturated states.
A voltage level detector can be obtained if a reference voltage Vref is applied to one of the op amp's inputs. This means that the op amp is set up as a comparator to detect a positive voltage. If the voltage to be sensed, Ei, is applied to op amp's (+) input, the result is a noninverting positive-level detector: when Ei is above Vref, VO equals +Vsat; when Ei is below Vref, VO equals −Vsat. If Ei is applied to the inverting input, the circuit is an inverting positive-level detector: When Ei is above Vref, VO equals −Vsat.
A zero voltage level detector (Ei = 0) can convert, for example, the output of a sine-wave from a function generator into a variable-frequency square wave. If Ei is a sine wave, triangular wave, or wave of any other shape that is symmetrical around zero, the zero-crossing detector's output will be square. Zero-crossing detection may also be useful in triggering TRIACs at the best time to reduce mains interference and current spikes.
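The level detectors and zero-crossing detector described above are easy to model in a few lines; the sketch below treats the op amp as an ideal comparator with illustrative ±13 V saturation levels (the actual saturation voltage depends on the supplies).

V_SAT = 13.0   # illustrative saturation voltage, volts

def noninverting_detector(e_i, v_ref=0.0):
    # output swings to +Vsat when the sensed voltage exceeds the reference
    return V_SAT if e_i > v_ref else -V_SAT

def inverting_detector(e_i, v_ref=0.0):
    # output swings to -Vsat when the sensed voltage exceeds the reference
    return -V_SAT if e_i > v_ref else V_SAT

# a zero-crossing detector is the special case v_ref = 0: a symmetric input becomes a square wave
for e_i in (-0.5, 0.5, 1.5):
    print(e_i, noninverting_detector(e_i, 1.0), inverting_detector(e_i, 1.0))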
Positive-feedback applications
Another typical configuration of op amps is with positive feedback, which takes a fraction of the output signal back to the non-inverting input. An important application of positive feedback is the comparator with hysteresis, the Schmitt trigger.
Some circuits may use positive feedback and negative feedback around the same amplifier, for example triangle-wave oscillators and active filters.
Negative-feedback applications
Non-inverting amplifier
In a non-inverting amplifier, the output voltage changes in the same direction as the input voltage.
The gain equation for the op amp is
Vout = AOL (V+ − V−).
However, in this circuit V− is a function of Vout because of the negative feedback through the R1 R2 network. R1 and R2 form a voltage divider, and as V− is a high-impedance input, it does not load it appreciably. Consequently
V− = β Vout,
where
β = R1 / (R1 + R2).
Substituting this into the gain equation, we obtain
Vout = AOL (Vin − β Vout).
Solving for Vout:
Vout = Vin (AOL / (1 + β AOL)).
If AOL is very large, this simplifies to
Vout ≈ Vin / β = Vin (1 + R2 / R1).
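The effect of a finite open-loop gain on the result above can be seen numerically. The sketch below uses illustrative resistor values (R1 = 1 kΩ, R2 = 9 kΩ, so the ideal gain is 10) and a few example open-loop gains.

def noninv_gain(a_ol, r1=1e3, r2=9e3):
    # closed-loop gain of the non-inverting amplifier with finite open-loop gain
    beta = r1 / (r1 + r2)                  # feedback fraction from the R1/R2 divider
    return a_ol / (1.0 + a_ol * beta)

ideal = 1 + 9e3 / 1e3                      # 1 + R2/R1 = 10
for a_ol in (1e3, 1e5, 2e5):
    print(f"A_OL = {a_ol:.0e}: closed-loop gain = {noninv_gain(a_ol):.4f} (ideal {ideal:.0f})")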
The non-inverting input of the operational amplifier needs a path for DC to ground; if the signal source does not supply a DC path, or if that source requires a given load impedance, then the circuit will require another resistor from the non-inverting input to ground. When the operational amplifier's input bias currents are significant, then the DC source resistances driving the inputs should be balanced. The ideal value for the feedback resistors (to give minimal offset voltage) will be such that the two resistances in parallel roughly equal the resistance to ground at the non-inverting input pin. That ideal value assumes the bias currents are well matched, which may not be true for all op amps.
Inverting amplifier
In an inverting amplifier, the output voltage changes in an opposite direction to the input voltage.
As with the non-inverting amplifier, we start with the gain equation of the op amp:
Vout = AOL (V+ − V−).
This time, V− is a function of both Vout and Vin due to the voltage divider formed by Rf and Rin. Again, the op-amp input does not apply an appreciable load, so
V− = (Rf Vin + Rin Vout) / (Rf + Rin).
Substituting this into the gain equation (with V+ = 0) and solving for Vout:
Vout = −Vin (AOL Rf / (Rf + Rin + AOL Rin)).
If AOL is very large, this simplifies to
Vout ≈ −Vin Rf / Rin.
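The same exercise for the inverting configuration shows how quickly the exact expression approaches the ideal −Rf/Rin value; the resistor values below (Rin = 1 kΩ, Rf = 10 kΩ) are again purely illustrative.

def inv_gain(a_ol, r_in=1e3, r_f=10e3):
    # closed-loop gain of the inverting amplifier with finite open-loop gain
    return -a_ol * r_f / (r_f + r_in + a_ol * r_in)

ideal = -10e3 / 1e3                        # -Rf/Rin = -10
for a_ol in (1e3, 1e5, 2e5):
    print(f"A_OL = {a_ol:.0e}: closed-loop gain = {inv_gain(a_ol):.4f} (ideal {ideal:.0f})")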
A resistor is often inserted between the non-inverting input and ground (so both inputs see similar resistances), reducing the input offset voltage due to different voltage drops due to bias current, and may reduce distortion in some op amps.
A DC-blocking capacitor may be inserted in series with the input resistor when a frequency response down to DC is not needed and any DC voltage on the input is unwanted. That is, the capacitive component of the input impedance inserts a DC zero and a low-frequency pole that gives the circuit a bandpass or high-pass characteristic.
The potentials at the operational amplifier inputs remain virtually constant (near ground) in the inverting configuration. The constant operating potential typically results in distortion levels that are lower than those attainable with the non-inverting topology.
Other applications
audio- and video-frequency pre-amplifiers and buffers
differential amplifiers
differentiators and integrators
filters
precision rectifiers
precision peak detectors
voltage and current regulators
analog calculators
analog-to-digital converters
digital-to-analog converters
voltage clamping
oscillators and waveform generators
clipper
clamper (dc inserter or restorer)
LOG and ANTILOG amplifiers
Most single, dual and quad op amps available have a standardized pin-out which permits one type to be substituted for another without wiring changes. A specific op amp may be chosen for its open loop gain, bandwidth, noise performance, input impedance, power consumption, or a compromise between any of these factors.
Historical timeline
1941: A vacuum tube op amp. An op amp, defined as a general-purpose, DC-coupled, high gain, inverting feedback amplifier, is first found in "Summing Amplifier" filed by Karl D. Swartzel Jr. of Bell Labs in 1941. This design used three vacuum tubes to achieve high gain and operated on high-voltage supply rails. It had a single inverting input rather than differential inverting and non-inverting inputs, as are common in today's op amps. Throughout World War II, Swartzel's design proved its value by being liberally used in the M9 artillery director designed at Bell Labs. This artillery director worked with the SCR584 radar system to achieve extraordinary hit rates (near 90%) that would not have been possible otherwise.
1947: An op amp with an explicit non-inverting input. In 1947, the operational amplifier was first formally defined and named in a paper by John R. Ragazzini of Columbia University. In this same paper a footnote mentioned an op-amp design by a student that would turn out to be quite significant. This op amp, designed by Loebe Julie, was superior in a variety of ways. It had two major innovations. Its input stage used a long-tailed triode pair with loads matched to reduce drift in the output and, far more importantly, it was the first op-amp design to have two inputs (one inverting, the other non-inverting). The differential input made a whole range of new functionality possible, but it would not be used for a long time due to the rise of the chopper-stabilized amplifier.
1949: A chopper-stabilized op amp. In 1949, Edwin A. Goldberg designed a chopper-stabilized op amp. This set-up uses a normal op amp with an additional AC amplifier that goes alongside the op amp. The chopper gets an AC signal from DC by switching between the DC voltage and ground at a fast rate (60 Hz or 400 Hz). This signal is then amplified, rectified, filtered and fed into the op amp's non-inverting input. This vastly improved the gain of the op amp while significantly reducing the output drift and DC offset. Unfortunately, any design that used a chopper could not use its non-inverting input for any other purpose. Nevertheless, the much improved characteristics of the chopper-stabilized op amp made it the dominant way to use op amps. Techniques that used the non-inverting input regularly would not be very popular until the 1960s when op-amp ICs started to show up in the field.
1953: A commercially available op amp. In 1953, vacuum tube op amps became commercially available with the release of the model K2-W from George A. Philbrick Researches, Incorporated. The designation on the devices shown, GAP/R, is an acronym for the complete company name. Two nine-pin 12AX7 vacuum tubes were mounted in an octal package and had a model K2-P chopper add-on available that would effectively "use up" the non-inverting input. This op amp was based on a descendant of Loebe Julie's 1947 design and, along with its successors, would start the widespread use of op amps in industry.
1961: A discrete IC op amp. With the birth of the transistor in 1947, and the silicon transistor in 1954, the concept of ICs became a reality. The introduction of the planar process in 1959 made transistors and ICs stable enough to be commercially useful. By 1961, solid-state, discrete op amps were being produced. These op amps were effectively small circuit boards with packages such as edge connectors. They usually had hand-selected resistors in order to improve things such as voltage offset and drift. The P45 (1961) had a gain of 94 dB and ran on ±15 V rails. It was intended to deal with signals in a limited voltage range.
1961: A varactor bridge op amp. There have been many different directions taken in op-amp design. Varactor bridge op amps started to be produced in the early 1960s. They were designed to have extremely small input current and are still amongst the best op amps available in terms of common-mode rejection with the ability to correctly deal with hundreds of volts at their inputs.
1962: An op amp in a potted module. By 1962, several companies were producing modular potted packages that could be plugged into printed circuit boards. These packages were crucially important as they made the operational amplifier into a single black box which could be easily treated as a component in a larger circuit.
1963: A monolithic IC op amp. In 1963, the first monolithic IC op amp, the μA702 designed by Bob Widlar at Fairchild Semiconductor, was released. Monolithic ICs consist of a single chip as opposed to a chip and discrete parts (a discrete IC) or multiple chips bonded and connected on a circuit board (a hybrid IC). Almost all modern op amps are monolithic ICs; however, this first IC did not meet with much success. Issues such as an uneven supply voltage, low gain and a small dynamic range held off the dominance of monolithic op amps until 1965 when the μA709 (also designed by Bob Widlar) was released.
1968: Release of the μA741. The popularity of monolithic op amps was further improved upon the release of the LM101 in 1967, which solved a variety of issues, and the subsequent release of the μA741 in 1968. The μA741 was extremely similar to the LM101 except that Fairchild's facilities allowed them to include a 30 pF compensation capacitor inside the chip instead of requiring external compensation. This simple difference has made the 741 the canonical op amp and many modern amps base their pinout on the 741's. The μA741 is still in production, and has become ubiquitous in electronics—many manufacturers produce a version of this classic chip, recognizable by part numbers containing 741. The same part is manufactured by several companies.
1970: First high-speed, low-input current FET design.
In the 1970s high speed, low-input current designs started to be made by using FETs. These would be largely replaced by op amps made with MOSFETs in the 1980s.
1972: Single sided supply op amps being produced. A single sided supply op amp is one where the input and output voltages can be as low as the negative power supply voltage instead of needing to be at least two volts above it. The result is that it can operate in many applications with the negative supply pin on the op amp being connected to the signal ground, thus eliminating the need for a separate negative power supply.
The LM324 (released in 1972) was one such op amp that came in a quad package (four separate op amps in one package) and became an industry standard. In addition to packaging multiple op amps in a single package, the 1970s also saw the birth of op amps in hybrid packages. These op amps were generally improved versions of existing monolithic op amps. As the properties of monolithic op amps improved, the more complex hybrid ICs were quickly relegated to systems that are required to have extremely long service lives or other specialty systems.
Recent trends. Recently supply voltages in analog circuits have decreased (as they have in digital logic) and low-voltage op amps have been introduced reflecting this. Supplies of 5 V and increasingly 3.3 V (sometimes as low as 1.8 V) are common. To maximize the signal range modern op amps commonly have rail-to-rail output (the output signal can range from the lowest supply voltage to the highest) and sometimes rail-to-rail inputs.
See also
Active filter
Analog computer
Bob Widlar
Current conveyor
Current-feedback operational amplifier
Differential amplifier
George A. Philbrick
Instrumentation amplifier
List of LM-series integrated circuits
Negative feedback amplifier
Op-amp swapping
Operational amplifier applications
Operational transconductance amplifier
Sallen–Key topology
Notes
References
Further reading
Books
Op Amps For Everyone; 5th Ed; Bruce Carter, Ron Mancini; Newnes; 484 pages; 2017; . (2 MB PDF - 1st edition)
Operational Amplifiers - Theory and Design; 3rd Ed; Johan Huijsing; Springer; 423 pages; 2017; .
Operational Amplifiers and Linear Integrated Circuits - Theory and Application; 3rd Ed; James Fiore; Creative Commons; 589 pages; 2016.(13 MB PDF Text)(2 MB PDF Lab)
Analysis and Design of Linear Circuits; 8th Ed; Roland Thomas, Albert Rosa, Gregory Toussaint; Wiley; 912 pages; 2016; .
Design with Operational Amplifiers and Analog Integrated Circuits; 4th Ed; Sergio Franco; McGraw Hill; 672 pages; 2015; .
Small Signal Audio Design; 2nd Ed; Douglas Self; Focal Press; 780 pages; 2014; .
Linear Circuit Design Handbook; 1st Ed; Hank Zumbahlen; Newnes; 960 pages; 2008; . (35 MB PDF)
Op Amp Applications Handbook; 1st Ed; Walt Jung; Analog Devices & Newnes; 896 pages; 2005; . (17 MB PDF)
Operational Amplifiers and Linear Integrated Circuits; 6th Ed; Robert Coughlin, Frederick Driscoll; Prentice Hall; 529 pages; 2001; .
Active-Filter Cookbook; 2nd Ed; Don Lancaster; Sams; 240 pages; 1996; . (28 MB PDF - 1st edition)
IC Op-Amp Cookbook; 3rd Ed; Walt Jung; Prentice Hall; 433 pages; 1986; . (18 MB PDF - 1st edition)
Engineer's Mini-Notebook – OpAmp IC Circuits; 1st Ed; Forrest Mims III; Radio Shack; 49 pages; 1985; ASIN B000DZG196. (4 MB PDF)
Designing with Operational Amplifiers - Applications Alternatives; 1st Ed; Jerald Graeme; Burr-Brown & McGraw Hill; 269 pages; 1976; .
Applications of Operational Amplifiers - Third Generation Techniques; 1st Ed; Jerald Graeme; Burr-Brown & McGraw Hill; 233 pages; 1973; . (37 MB PDF)
Understanding IC Operational Amplifiers; 1st Ed; Roger Melen and Harry Garland; Sams Publishing; 128 pages; 1971; . (archive)
Operational Amplifiers - Design and Applications; 1st Ed; Jerald Graeme, Gene Tobey, Lawrence Huelsman; Burr-Brown & McGraw Hill; 473 pages; 1971; .
Books with opamp chapters
Learning the Art of Electronics - A Hands-On Lab Course; 1st Ed; Thomas Hayes, Paul Horowitz; Cambridge; 1150 pages; 2016; . (Part 3 is 268 pages)
The Art of Electronics; 3rd Ed; Paul Horowitz, Winfield Hill; Cambridge; 1220 pages; 2015; . (Chapter 4 is 69 pages)
Lessons in Electric Circuits - Volume III - Semiconductors; 5th Ed; Tony Kuphaldt; Open Book Project; 528 page; 2009. (Chapter 8 is 59 pages) (4 MB PDF)
Troubleshooting Analog Circuits; 1st Ed; Bob Pease; Newnes; 217 pages; 1991; . (Chapter 8 is 19 pages)
Historical application handbooks
Analog Applications Manual (1979, 418 pages), Signetics. (OpAmps in section 3)
Historical databooks
Linear Databook 1 (1988, 1262 pages), National Semiconductor. (OpAmps in section 2)
Linear and Interface Databook (1990, 1658 pages), Motorola. (OpAmps in section 2)
Linear Databook (1986, 568 pages), RCA.
Historical datasheets
LM301, Single BJT OpAmp, Texas Instruments
LM324, Quad BJT OpAmp, Texas Instruments
LM741, Single BJT OpAmp, Texas Instruments
NE5532, Dual BJT OpAmp, Texas Instruments (NE5534 is similar single)
TL072, Dual JFET OpAmp, Texas Instruments (TL074 is Quad)
External links
Op Amp Circuit Collection- National Semiconductor Corporation
Operational Amplifiers - Chapter on All About Circuits
Loop Gain and its Effects on Analog Circuit Performance - Introduction to loop gain, gain and phase margin, loop stability
Simple Op Amp Measurements How to measure offset voltage, offset and bias current, gain, CMRR, and PSRR.
Operational Amplifiers. Introductory on-line text by E. J. Mastascusa (Bucknell University).
Introduction to op-amp circuit stages, second order filters, single op-amp bandpass filters, and a simple intercom
MOS op amp design: A tutorial overview
Operational Amplifier Noise Prediction (All Op Amps) using spot noise
Operational Amplifier Basics
History of the Op-amp , from vacuum tubes to about 2002
Loebe Julie historical OpAmp interview by Bob Pease
www.PhilbrickArchive.org A free repository of materials from George A Philbrick / Researches - Operational Amplifier Pioneer
What's The Difference Between Operational Amplifiers And Instrumentation Amplifiers? , Electronic Design Magazine
Electronic amplifiers
Linear integrated circuits
Integrated circuits | Operational amplifier | [
"Technology",
"Engineering"
] | 11,502 | [
"Computer engineering",
"Electronic amplifiers",
"Amplifiers",
"Integrated circuits"
] |
22,830 | https://en.wikipedia.org/wiki/Ostwald%20process | The Ostwald process is a chemical process used for making nitric acid (HNO3). The Ostwald process is a mainstay of the modern chemical industry, and it provides the main raw material for the most common type of fertilizer production. Historically and practically, the Ostwald process is closely associated with the Haber process, which provides the requisite raw material, ammonia (NH3). This method is preferred over other methods of nitric acid production, in that it is less expensive and more efficient.
Reactions
Ammonia is converted to nitric acid in 2 stages.
Initial oxidation of ammonia
The Ostwald process begins with burning ammonia. Ammonia burns in oxygen at a temperature of about 900 °C and at elevated pressure in the presence of a catalyst such as platinum gauze, alloyed with 10% rhodium to increase its strength and nitric oxide yield, platinum metal on fused silica wool, copper or nickel, to form nitric oxide (nitrogen(II) oxide) and water (as steam). This reaction is strongly exothermic, making it a useful heat source once initiated:
4 NH3 + 5 O2 → 4 NO + 6 H2O (ΔH = −905.2 kJ/mol)
Side reactions
A number of side reactions compete with the formation of nitric oxide. Some reactions convert the ammonia to N2, such as:
4 NH3 + 3 O2 → 2 N2 + 6 H2O
This is a secondary reaction that is minimised by reducing the time the gas mixtures are in contact with the catalyst.
Another side reaction produces nitrous oxide:
4 NH3 + 4 O2 → 2 N2O + 6 H2O (ΔH = −1105 kJ/mol)
Platinum-rhodium catalyst
The platinum and rhodium catalyst is frequently replaced due to decomposition as a result of the extreme conditions which it operates under, leading to a form of degradation called cauliflowering. The exact mechanism of this process is unknown, the main theories being physical degradation by hydrogen atoms penetrating the platinum-rhodium lattice, or by metal atom transport from the centre of the metal to the surface.
Secondary oxidation
The nitric oxide (NO) formed in the prior catalysed reaction is then cooled down from around 900 °C to roughly 250 °C to be further oxidised to nitrogen dioxide (NO2) by the reaction:
2 NO + O2 → 2 NO2 (ΔH = −114.2 kJ/mol)
The reaction:
2 NO2 ⇌ N2O4 (ΔH = −57.2 kJ/mol)
also occurs once the nitrogen dioxide has formed.
Conversion of nitric oxide
Stage two encompasses the absorption of the nitrogen oxides in water and is carried out in an absorption apparatus, a plate column containing water. The nitrogen dioxide is readily absorbed by the water, yielding the desired product (nitric acid in a dilute form), while reducing a portion of it back to nitric oxide:
3 NO2 + H2O → 2 HNO3 + NO (ΔH = −117 kJ/mol)
The NO is recycled, and the acid is concentrated to the required strength by distillation.
This is only one of over 40 recorded absorption reactions of the nitrogen oxides; several other reactions also commonly occur in the absorption column.
And, if the last step is carried out in air:
4 NO2 + O2 + 2 H2O → 4 HNO3 (ΔH = −348 kJ/mol).
Overall reaction
The overall reaction is the sum of the first equation, 3 times the second equation, and 2 times the last equation; all divided by 2:
NH3 + 2 O2 → HNO3 + H2O (ΔH = −740.6 kJ/mol)
Alternatively, if the last step is carried out in the air, the overall reaction is the sum of equation 1, 2 times equation 2, and equation 4; all divided by 2.
Without considering the state of the water,
NH3 + 2 O2 → HNO3 + H2O (ΔH = −370.3 kJ/mol)
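A rough Hess's-law check of the stage enthalpies quoted above can be done in a few lines. The sketch below combines the values as printed; it reproduces the per-mole gas-phase figure, while the −740.6 kJ/mol value presumably also reflects the state of the water, which the text says is not considered in the −370.3 kJ/mol figure.

dH_oxidation  = -905.2   # 4 NH3 + 5 O2 -> 4 NO + 6 H2O, kJ for the equation as written
dH_no_to_no2  = -114.2   # 2 NO + O2 -> 2 NO2
dH_absorption = -117.0   # 3 NO2 + H2O -> 2 HNO3 + NO

# equation 1, plus 3x equation 2, plus 2x equation 3, consumes 4 mol of NH3
total = dH_oxidation + 3 * dH_no_to_no2 + 2 * dH_absorption
print(f"combined: {total:.1f} kJ, i.e. {total / 4:.1f} kJ per mole of NH3")
# about -370 kJ/mol of NH3, in line with the -370.3 kJ/mol quoted above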
History
Wilhelm Ostwald developed the process, and he patented it in 1902.
See also
Birkeland–Eyde process
References
External links
Nitrogen & Phosphorus (General Chemistry course), Purdue University
Drake, G; "Processes for the Manufacture of Nitric Acid" (1963), International Fertiliser Society (paysite/password)
Manufacturing Nitrates: the Ostwald process Carlton Comprehensive High School; Prince Albert; Saskatchewan, Canada.
Chemical processes
Industrial processes
Catalysis
German inventions
1902 in science
1902 in Germany | Ostwald process | [
"Chemistry"
] | 849 | [
"Catalysis",
"Chemical processes",
"nan",
"Chemical process engineering",
"Chemical kinetics"
] |
22,915 | https://en.wikipedia.org/wiki/Planet | A planet is a large, rounded astronomical body that is generally required to be in orbit around a star, stellar remnant, or brown dwarf, and is not one itself. The Solar System has eight planets by the most restrictive definition of the term: the terrestrial planets Mercury, Venus, Earth, and Mars, and the giant planets Jupiter, Saturn, Uranus, and Neptune. The best available theory of planet formation is the nebular hypothesis, which posits that an interstellar cloud collapses out of a nebula to create a young protostar orbited by a protoplanetary disk. Planets grow in this disk by the gradual accumulation of material driven by gravity, a process called accretion.
The word planet comes from the Greek πλανήτης (planḗtēs), meaning 'wanderer'. In antiquity, this word referred to the Sun, Moon, and five points of light visible to the naked eye that moved across the background of the stars—namely, Mercury, Venus, Mars, Jupiter, and Saturn. Planets have historically had religious associations: multiple cultures identified celestial bodies with gods, and these connections with mythology and folklore persist in the schemes for naming newly discovered Solar System bodies. Earth itself was recognized as a planet when heliocentrism supplanted geocentrism during the 16th and 17th centuries.
With the development of the telescope, the meaning of planet broadened to include objects only visible with assistance: the moons of the planets beyond Earth; the ice giants Uranus and Neptune; Ceres and other bodies later recognized to be part of the asteroid belt; and Pluto, later found to be the largest member of the collection of icy bodies known as the Kuiper belt. The discovery of other large objects in the Kuiper belt, particularly Eris, spurred debate about how exactly to define a planet. In 2006, the International Astronomical Union (IAU) adopted a definition of a planet in the Solar System, placing the four terrestrial planets and the four giant planets in the planet category; Ceres, Pluto, and Eris are in the category of dwarf planet. Many planetary scientists have nonetheless continued to apply the term planet more broadly, including dwarf planets as well as rounded satellites like the Moon.
Further advances in astronomy led to the discovery of over five thousand planets outside the Solar System, termed exoplanets. These often show unusual features that the Solar System planets do not show, such as hot Jupiters—giant planets that orbit close to their parent stars, like 51 Pegasi b—and extremely eccentric orbits, such as HD 20782 b. The discovery of brown dwarfs and planets larger than Jupiter also spurred debate on the definition, regarding where exactly to draw the line between a planet and a star. Multiple exoplanets have been found to orbit in the habitable zones of their stars (where liquid water can potentially exist on a planetary surface), but Earth remains the only planet known to support life.
Formation
It is not known with certainty how planets are formed. The prevailing theory is that they coalesce during the collapse of a nebula into a thin disk of gas and dust. A protostar forms at the core, surrounded by a rotating protoplanetary disk. Through accretion (a process of sticky collision) dust particles in the disk steadily accumulate mass to form ever-larger bodies. Local concentrations of mass known as planetesimals form, and these accelerate the accretion process by drawing in additional material by their gravitational attraction. These concentrations become increasingly dense until they collapse inward under gravity to form protoplanets. After a planet reaches a mass somewhat larger than Mars's mass, it begins to accumulate an extended atmosphere, greatly increasing the capture rate of the planetesimals by means of atmospheric drag. Depending on the accretion history of solids and gas, a giant planet, an ice giant, or a terrestrial planet may result. It is thought that the regular satellites of Jupiter, Saturn, and Uranus formed in a similar way; however, Triton was likely captured by Neptune, and Earth's Moon and Pluto's Charon might have formed in collisions.
When the protostar has grown such that it ignites to form a star, the surviving disk is removed from the inside outward by photoevaporation, the solar wind, Poynting–Robertson drag and other effects. Thereafter there still may be many protoplanets orbiting the star or each other, but over time many will collide, either to form a larger, combined protoplanet or release material for other protoplanets to absorb. Those objects that have become massive enough will capture most matter in their orbital neighbourhoods to become planets. Protoplanets that have avoided collisions may become natural satellites of planets through a process of gravitational capture, or remain in belts of other objects to become either dwarf planets or small bodies.
The energetic impacts of the smaller planetesimals (as well as radioactive decay) will heat up the growing planet, causing it to at least partially melt. The interior of the planet begins to differentiate by density, with higher density materials sinking toward the core. Smaller terrestrial planets lose most of their atmospheres because of this accretion, but the lost gases can be replaced by outgassing from the mantle and from the subsequent impact of comets (smaller planets will lose any atmosphere they gain through various escape mechanisms).
With the discovery and observation of planetary systems around stars other than the Sun, it is becoming possible to elaborate, revise or even replace this account. The level of metallicity—an astronomical term describing the abundance of chemical elements with an atomic number greater than 2 (helium)—appears to determine the likelihood that a star will have planets. Hence, a metal-rich population I star is more likely to have a substantial planetary system than a metal-poor, population II star.
Planets in the Solar System
According to the IAU definition, there are eight planets in the Solar System, which are (in increasing distance from the Sun): Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Jupiter is the largest, at 318 Earth masses, whereas Mercury is the smallest, at 0.055 Earth masses.
The planets of the Solar System can be divided into categories based on their composition. Terrestrials are similar to Earth, with bodies largely composed of rock and metal: Mercury, Venus, Earth, and Mars. Earth is the largest terrestrial planet. Giant planets are significantly more massive than the terrestrials: Jupiter, Saturn, Uranus, and Neptune. They differ from the terrestrial planets in composition. The gas giants, Jupiter and Saturn, are primarily composed of hydrogen and helium and are the most massive planets in the Solar System. Saturn is one third as massive as Jupiter, at 95 Earth masses. The ice giants, Uranus and Neptune, are primarily composed of low-boiling-point materials such as water, methane, and ammonia, with thick atmospheres of hydrogen and helium. They have a significantly lower mass than the gas giants (only 14 and 17 Earth masses).
Dwarf planets are gravitationally rounded, but have not cleared their orbits of other bodies. In increasing order of average distance from the Sun, the ones generally agreed among astronomers are Ceres, Orcus, Pluto, Haumea, Quaoar, Makemake, Gonggong, Eris, and Sedna. Ceres is the largest object in the asteroid belt, located between the orbits of Mars and Jupiter. The other eight all orbit beyond Neptune. Orcus, Pluto, Haumea, Quaoar, and Makemake orbit in the Kuiper belt, which is a second belt of small Solar System bodies beyond the orbit of Neptune. Gonggong and Eris orbit in the scattered disc, which is somewhat further out and, unlike the Kuiper belt, is unstable towards interactions with Neptune. Sedna is the largest known detached object, a population that never comes close enough to the Sun to interact with any of the classical planets; the origins of their orbits are still being debated. All nine are similar to terrestrial planets in having a solid surface, but they are made of ice and rock rather than rock and metal. Moreover, all of them are smaller than Mercury, with Pluto being the largest known dwarf planet and Eris being the most massive.
There are at least nineteen planetary-mass moons or satellite planets—moons large enough to take on ellipsoidal shapes:
One satellite of Earth: the Moon
Four satellites of Jupiter: Io, Europa, Ganymede, and Callisto
Seven satellites of Saturn: Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Iapetus
Five satellites of Uranus: Miranda, Ariel, Umbriel, Titania, and Oberon
One satellite of Neptune: Triton
One satellite of Pluto: Charon
The Moon, Io, and Europa have compositions similar to the terrestrial planets; the others are made of ice and rock like the dwarf planets, with Tethys being made of almost pure ice. Europa is often considered an icy planet, though, because its surface ice layer makes it difficult to study its interior. Ganymede and Titan are larger than Mercury by radius, and Callisto almost equals it, but all three are much less massive. Mimas is the smallest object generally agreed to be a geophysical planet, at about six millionths of Earth's mass, though there are many larger bodies that may not be geophysical planets.
Exoplanets
An exoplanet is a planet outside the Solar System. Known exoplanets range in size from gas giants about twice as large as Jupiter down to just over the size of the Moon. Analysis of gravitational microlensing data suggests a minimum average of 1.6 bound planets for every star in the Milky Way.
In early 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed and is generally considered to be the first definitive detection of exoplanets. Researchers suspect they formed from a disk remnant left over from the supernova that produced the pulsar.
The first confirmed discovery of an exoplanet orbiting an ordinary main-sequence star occurred on 6 October 1995, when Michel Mayor and Didier Queloz of the University of Geneva announced the detection of 51 Pegasi b, an exoplanet around 51 Pegasi. From then until the Kepler space telescope mission, most of the known exoplanets were gas giants comparable in mass to Jupiter or larger as they were more easily detected. The catalog of Kepler candidate planets consists mostly of planets the size of Neptune and smaller, down to smaller than Mercury.
In 2011, the Kepler space telescope team reported the discovery of the first Earth-sized exoplanets orbiting a Sun-like star, Kepler-20e and Kepler-20f. Since that time, more than 100 planets have been identified that are approximately the same size as Earth, 20 of which orbit in the habitable zone of their star—the range of orbits where a terrestrial planet could sustain liquid water on its surface, given enough atmospheric pressure. One in five Sun-like stars is thought to have an Earth-sized planet in its habitable zone, which suggests that the nearest would be expected to be within 12 light-years distance from Earth. The frequency of occurrence of such terrestrial planets is one of the variables in the Drake equation, which estimates the number of intelligent, communicating civilizations that exist in the Milky Way.
There are types of planets that do not exist in the Solar System: super-Earths and mini-Neptunes, which have masses between that of Earth and Neptune. Objects less than about twice the mass of Earth are expected to be rocky like Earth; beyond that, they become a mixture of volatiles and gas like Neptune. The planet Gliese 581c, with a mass 5.5–10.4 times the mass of Earth, attracted attention upon its discovery for potentially being in the habitable zone, though later studies concluded that it is actually too close to its star to be habitable. Planets more massive than Jupiter are also known, extending seamlessly into the realm of brown dwarfs.
Exoplanets have been found that are much closer to their parent star than any planet in the Solar System is to the Sun. Mercury, the closest planet to the Sun at 0.4 AU, takes 88 days for an orbit, but ultra-short period planets can orbit in less than a day. The Kepler-11 system has five of its planets in shorter orbits than Mercury's, all of them much more massive than Mercury. There are hot Jupiters, such as 51 Pegasi b, that orbit very close to their star and may evaporate to become chthonian planets, which are the leftover cores. There are also exoplanets that are much farther from their star. Neptune is 30 AU from the Sun and takes 165 years to orbit, but there are exoplanets that are thousands of AU from their star and take more than a million years to orbit (e.g. COCONUTS-2b).
Attributes
Although each planet has unique physical characteristics, a number of broad commonalities do exist among them. Some of these characteristics, such as rings or natural satellites, have only as yet been observed in planets in the Solar System, whereas others are commonly observed in exoplanets.
Dynamic characteristics
Orbit
In the Solar System, all the planets orbit the Sun in the same direction as the Sun rotates: counter-clockwise as seen from above the Sun's north pole. At least one exoplanet, WASP-17b, has been found to orbit in the opposite direction to its star's rotation. The period of one revolution of a planet's orbit is known as its sidereal period or year. A planet's year depends on its distance from its star; the farther a planet is from its star, the longer the distance it must travel and the slower its speed, since it is less affected by its star's gravity.
No planet's orbit is perfectly circular, and hence the distance of each from the host star varies over the course of its year. The closest approach to its star is called its periastron, or perihelion in the Solar System, whereas its farthest separation from the star is called its apastron (aphelion). As a planet approaches periastron, its speed increases as it trades gravitational potential energy for kinetic energy, just as a falling object on Earth accelerates as it falls. As the planet nears apastron, its speed decreases, just as an object thrown upwards on Earth slows down as it reaches the apex of its trajectory.
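The qualitative statement above, that more distant planets take longer to orbit, is made quantitative by Kepler's third law (not named in the text). For bodies orbiting the Sun, with distances in astronomical units and periods in years, it reduces to the short sketch below; the semi-major axes used are approximate.

import math

def orbital_period_years(a_au):
    # Kepler's third law in Solar units: P^2 = a^3 for a body orbiting the Sun
    return math.sqrt(a_au ** 3)

for name, a in (("Mercury", 0.39), ("Earth", 1.0), ("Neptune", 30.1)):
    print(f"{name:8s} a = {a:5.2f} AU -> P ~ {orbital_period_years(a):7.2f} yr")
# Mercury comes out near 88 days and Neptune near 165 years, as quoted earlier.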
Each planet's orbit is delineated by a set of elements:
The eccentricity of an orbit describes the elongation of a planet's elliptical (oval) orbit. Planets with low eccentricities have more circular orbits, whereas planets with high eccentricities have more elliptical orbits. The planets and large moons in the Solar System have relatively low eccentricities, and thus nearly circular orbits. The comets and many Kuiper belt objects, as well as several exoplanets, have very high eccentricities, and thus exceedingly elliptical orbits.
The semi-major axis gives the size of the orbit. It is the distance from the midpoint to the longest diameter of its elliptical orbit. This distance is not the same as its apastron, because no planet's orbit has its star at its exact centre.
The inclination of a planet tells how far above or below an established reference plane its orbit is tilted. In the Solar System, the reference plane is the plane of Earth's orbit, called the ecliptic. For exoplanets, the plane, known as the sky plane or plane of the sky, is the plane perpendicular to the observer's line of sight from Earth. The orbits of the eight major planets of the Solar System all lie very close to the ecliptic; however, some smaller objects like Pallas, Pluto, and Eris orbit at far more extreme angles to it, as do comets. The large moons are generally not very inclined to their parent planets' equators, but Earth's Moon, Saturn's Iapetus, and Neptune's Triton are exceptions. Triton is unique among the large moons in that it orbits retrograde, i.e. in the direction opposite to its parent planet's rotation.
The points at which a planet crosses above and below its reference plane are called its ascending and descending nodes. The longitude of the ascending node is the angle between the reference plane's 0 longitude and the planet's ascending node. The argument of periapsis (or perihelion in the Solar System) is the angle between a planet's ascending node and its closest approach to its star.
Axial tilt
Planets have varying degrees of axial tilt; they spin at an angle to the plane of their stars' equators. This causes the amount of light received by each hemisphere to vary over the course of its year; when the Northern Hemisphere points away from its star, the Southern Hemisphere points towards it, and vice versa. Each planet therefore has seasons, resulting in changes to the climate over the course of its year. The time at which each hemisphere points farthest or nearest from its star is known as its solstice. Each planet has two in the course of its orbit; when one hemisphere has its summer solstice with its day being the longest, the other has its winter solstice when its day is shortest. The varying amount of light and heat received by each hemisphere creates annual changes in weather patterns for each half of the planet. Jupiter's axial tilt is very small, so its seasonal variation is minimal; Uranus, on the other hand, has an axial tilt so extreme it is virtually on its side, which means that its hemispheres are either continually in sunlight or continually in darkness around the time of its solstices. In the Solar System, Mercury, Venus, Ceres, and Jupiter have very small tilts; Pallas, Uranus, and Pluto have extreme ones; and Earth, Mars, Vesta, Saturn, and Neptune have moderate ones. Among exoplanets, axial tilts are not known for certain, though most hot Jupiters are believed to have a negligible axial tilt as a result of their proximity to their stars. Similarly, the axial tilts of the planetary-mass moons are near zero, with Earth's Moon at 6.687° as the biggest exception; additionally, Callisto's axial tilt varies between 0 and about 2 degrees on timescales of thousands of years.
Rotation
The planets rotate around invisible axes through their centres. A planet's rotation period is known as a stellar day. Most of the planets in the Solar System rotate in the same direction as they orbit the Sun, which is counter-clockwise as seen from above the Sun's north pole. The exceptions are Venus and Uranus, which rotate clockwise, though Uranus's extreme axial tilt means there are differing conventions on which of its poles is "north", and therefore whether it is rotating clockwise or anti-clockwise. Regardless of which convention is used, Uranus has a retrograde rotation relative to its orbit.
The rotation of a planet can be induced by several factors during formation. A net angular momentum can be induced by the individual angular momentum contributions of accreted objects. The accretion of gas by the giant planets contributes to the angular momentum. Finally, during the last stages of planet building, a stochastic process of protoplanetary accretion can randomly alter the spin axis of the planet. There is great variation in the length of day between the planets, with Venus taking 243 days to rotate, and the giant planets only a few hours. The rotational periods of exoplanets are not known, but for hot Jupiters, their proximity to their stars means that they are tidally locked (that is, their orbits are in sync with their rotations). This means that they always show one face to their stars, with one side in perpetual day and the other in perpetual night. Mercury and Venus, the closest planets to the Sun, similarly exhibit very slow rotation: Mercury is tidally locked into a 3:2 spin–orbit resonance (rotating three times for every two revolutions around the Sun), and Venus's rotation may be in equilibrium between tidal forces slowing it down and atmospheric tides created by solar heating speeding it up.
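As a small arithmetic sketch of how spin and orbit combine, the snippet below derives the length of Mercury's solar day from its 3:2 resonance; the rounded rotation and orbital periods are assumed textbook values rather than figures taken from this article.

```python
# Sketch: Mercury's solar day from its sidereal rotation and orbital period.
# The two periods are rounded textbook values, assumed here for illustration.

sidereal_rotation_days = 58.65  # one rotation relative to the stars (~2/3 of the orbital period)
orbital_period_days = 87.97     # one revolution around the Sun

# For prograde rotation, 1/solar_day = 1/rotation_period - 1/orbital_period.
solar_day = 1.0 / (1.0 / sidereal_rotation_days - 1.0 / orbital_period_days)
print(f"Mercury's solar day: about {solar_day:.0f} Earth days")  # ~176 days, i.e. two Mercury years
```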
All the large moons are tidally locked to their parent planets; Pluto and Charon are tidally locked to each other, as are Eris and Dysnomia, and probably Orcus and its moon Vanth. The other dwarf planets with known rotation periods rotate faster than Earth; Haumea rotates so fast that it has been distorted into a triaxial ellipsoid. The exoplanet Tau Boötis b and its parent star Tau Boötis appear to be mutually tidally locked.
Orbital clearing
The defining dynamic characteristic of a planet, according to the IAU definition, is that it has cleared its neighborhood. A planet that has cleared its neighborhood has accumulated enough mass to gather up or sweep away all the planetesimals in its orbit. In effect, it orbits its star in isolation, as opposed to sharing its orbit with a multitude of similar-sized objects. As described above, this characteristic was mandated as part of the IAU's official definition of a planet in August 2006. Although to date this criterion only applies to the Solar System, a number of young extrasolar systems have been found in which evidence suggests orbital clearing is taking place within their circumstellar discs.
Physical characteristics
Size and shape
Gravity causes planets to be pulled into a roughly spherical shape, so a planet's size can be expressed roughly by an average radius (for example, Earth radius or Jupiter radius). However, planets are not perfectly spherical; for example, the Earth's rotation causes it to be slightly flattened at the poles with a bulge around the equator. Therefore, a better approximation of Earth's shape is an oblate spheroid, whose equatorial diameter is larger than the pole-to-pole diameter. Generally, a planet's shape may be described by giving polar and equatorial radii of a spheroid or specifying a reference ellipsoid. From such a specification, the planet's flattening, surface area, and volume can be calculated; its normal gravity can be computed knowing its size, shape, rotation rate, and mass.
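A minimal sketch of the flattening calculation implied here, assuming commonly quoted equatorial and polar radii for Earth (the specific figures are illustrative, not taken from this article):

```python
# Sketch: flattening and mean radius of an oblate spheroid from its two radii.
# The radii are commonly quoted values for Earth, assumed here for illustration.

equatorial_radius_km = 6378.137
polar_radius_km = 6356.752

flattening = (equatorial_radius_km - polar_radius_km) / equatorial_radius_km
# Arithmetic mean of the three semi-axes (two equatorial, one polar)
mean_radius_km = (2 * equatorial_radius_km + polar_radius_km) / 3

print(f"flattening f = {flattening:.6f} (about 1/{1 / flattening:.0f})")
print(f"mean radius  = {mean_radius_km:.1f} km")
```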
Mass
A planet's defining physical characteristic is that it is massive enough for the force of its own gravity to dominate over the electromagnetic forces binding its physical structure, leading to a state of hydrostatic equilibrium. This effectively means that all planets are spherical or spheroidal. Up to a certain mass, an object can be irregular in shape, but beyond that point, which varies depending on the chemical makeup of the object, gravity begins to pull an object towards its own centre of mass until the object collapses into a sphere.
Mass is the prime attribute by which planets are distinguished from stars. No objects with masses intermediate between those of the Sun and Jupiter exist in the Solar System, but there are exoplanets in this mass range. The lower stellar mass limit is estimated to be around 75 to 80 Jupiter masses. Some authors advocate that this be used as the upper limit for planethood, on the grounds that the internal physics of objects does not change between approximately one Saturn mass (the beginning of significant self-compression) and the onset of hydrogen burning and becoming a red dwarf star. Beyond roughly 13 Jupiter masses (at least for objects with solar-type isotopic abundance), an object achieves conditions suitable for nuclear fusion of deuterium: this has sometimes been advocated as a boundary, even though deuterium burning does not last very long and most brown dwarfs have long since finished burning their deuterium. This is not universally agreed upon: the Extrasolar Planets Encyclopaedia includes objects up to 60 Jupiter masses, and the Exoplanet Data Explorer up to 24 Jupiter masses.
The smallest known exoplanet with an accurately known mass is PSR B1257+12A, one of the first exoplanets discovered, which was found in 1992 in orbit around a pulsar. Its mass is roughly half that of the planet Mercury. Even smaller is WD 1145+017 b, orbiting a white dwarf; its mass is roughly that of the dwarf planet Haumea, and it is typically termed a minor planet. The smallest known planet orbiting a main-sequence star other than the Sun is Kepler-37b, with a mass (and radius) that is probably slightly higher than that of the Moon. The smallest object in the Solar System generally agreed to be a geophysical planet is Saturn's moon Mimas, with a radius about 3.1% of Earth's and a mass about 0.00063% of Earth's. Saturn's smaller moon Phoebe, currently an irregular body of 1.7% Earth's radius and 0.00014% Earth's mass, is thought to have attained hydrostatic equilibrium and differentiation early in its history before being battered out of shape by impacts. Some asteroids may be fragments of protoplanets that began to accrete and differentiate, but suffered catastrophic collisions, leaving only a metallic or rocky core today, or a reaccumulation of the resulting debris.
Internal differentiation
Every planet began its existence in an entirely fluid state; in early formation, the denser, heavier materials sank to the centre, leaving the lighter materials near the surface. Each therefore has a differentiated interior consisting of a dense planetary core surrounded by a mantle that either is or was a fluid. The terrestrial planets' mantles are sealed within hard crusts, but in the giant planets the mantle simply blends into the upper cloud layers. The terrestrial planets have cores of elements such as iron and nickel and mantles of silicates. Jupiter and Saturn are believed to have cores of rock and metal surrounded by mantles of metallic hydrogen. Uranus and Neptune, which are smaller, have rocky cores surrounded by mantles of water, ammonia, methane, and other ices. The fluid action within these planets' cores creates a geodynamo that generates a magnetic field. Similar differentiation processes are believed to have occurred on some of the large moons and dwarf planets, though the process may not always have been completed: Ceres, Callisto, and Titan appear to be incompletely differentiated. The asteroid Vesta, though not a dwarf planet because it was battered by impacts out of roundness, has a differentiated interior similar to that of Venus, Earth, and Mars.
Atmosphere
All of the Solar System planets except Mercury have substantial atmospheres because their gravity is strong enough to keep gases close to the surface. Saturn's largest moon Titan also has a substantial atmosphere thicker than that of Earth; Neptune's largest moon Triton and the dwarf planet Pluto have more tenuous atmospheres. The larger giant planets are massive enough to keep large amounts of the light gases hydrogen and helium, whereas the smaller planets lose these gases into space. Analysis of exoplanets suggests that the threshold for being able to hold on to these light gases occurs at about twice the mass of Earth, so that Earth and Venus are near the maximum size for rocky planets.
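One common way to see why small planets lose light gases is to compare escape velocity with the thermal speed of gas molecules; the sketch below uses rounded values for Earth and molecular hydrogen, and the factor of roughly six is a rule-of-thumb retention heuristic rather than a precise threshold.

```python
import math

# Sketch: can a planet hold on to molecular hydrogen?
# Heuristic: a gas is retained over geological timescales roughly when the
# escape velocity exceeds ~6 times the mean thermal speed of its molecules.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23  # Boltzmann constant, J/K

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2 * G * mass_kg / radius_m)

def thermal_speed(molecular_mass_kg: float, temperature_k: float) -> float:
    return math.sqrt(3 * K_B * temperature_k / molecular_mass_kg)

# Rounded values for Earth, and H2 at an assumed ~1000 K upper-atmosphere temperature
v_esc = escape_velocity(5.972e24, 6.371e6)    # ~11.2 km/s
v_h2 = thermal_speed(2 * 1.674e-27, 1000.0)   # ~3.5 km/s

print(f"escape velocity {v_esc / 1000:.1f} km/s vs 6 x H2 thermal speed {6 * v_h2 / 1000:.1f} km/s")
# 6 * v_h2 exceeds v_esc, so Earth slowly loses free hydrogen, consistent with the text.
```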
The composition of Earth's atmosphere is different from those of the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. The atmospheres of Mars and Venus are both dominated by carbon dioxide, but differ drastically in density: the average surface pressure of Mars's atmosphere is less than 1% that of Earth's (too low to allow liquid water to exist), while the average surface pressure of Venus's atmosphere is about 92 times that of Earth's. It is likely that Venus's atmosphere was the result of a runaway greenhouse effect in its history, which today makes it the hottest planet by surface temperature, hotter even than Mercury. Despite the hostile surface conditions, the temperature and pressure at about 50–55 km altitude in Venus's atmosphere are close to Earthlike conditions (the only place in the Solar System beyond Earth where this is so), and this region has been suggested as a plausible base for future human exploration. Titan has the only nitrogen-rich planetary atmosphere in the Solar System other than Earth's. Just as Earth's conditions are close to the triple point of water, allowing it to exist in all three states on the planet's surface, so Titan's are to the triple point of methane.
Planetary atmospheres are affected by the varying insolation or internal energy, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), a greater-than-Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). Weather patterns detected on exoplanets include a hot region on HD 189733 b twice the size of the Great Red Spot, as well as clouds on the hot Jupiter Kepler-7b, the super-Earth Gliese 1214 b, and others.
Hot Jupiters, due to their extreme proximities to their host stars, have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides that produce supersonic winds, although multiple factors are involved and the details of the atmospheric dynamics that affect the day-night temperature difference are complex.
Magnetosphere
One important characteristic of the planets is their intrinsic magnetic moments, which in turn give rise to magnetospheres. The presence of a magnetic field indicates that the planet is still geologically alive. In other words, magnetized planets have flows of electrically conducting material in their interiors, which generate their magnetic fields. These fields significantly change the interaction of the planet and solar wind. A magnetized planet creates a cavity in the solar wind around itself called the magnetosphere, which the wind cannot penetrate. The magnetosphere can be much larger than the planet itself. In contrast, non-magnetized planets have only small magnetospheres induced by interaction of the ionosphere with the solar wind, which cannot effectively protect the planet.
Of the eight planets in the Solar System, only Venus and Mars lack such a magnetic field. Of the magnetized planets, the magnetic field of Mercury is the weakest and is barely able to deflect the solar wind. Jupiter's moon Ganymede has a magnetic field several times stronger, and Jupiter's is the strongest in the Solar System (so intense in fact that it poses a serious health risk to future crewed missions to all its moons inward of Callisto). The magnetic fields of the other giant planets, measured at their surfaces, are roughly similar in strength to that of Earth, but their magnetic moments are significantly larger. The magnetic fields of Uranus and Neptune are strongly tilted relative to the planets' rotational axes and displaced from the planets' centres.
In 2003, a team of astronomers in Hawaii observing the star HD 179949 detected a bright spot on its surface, apparently created by the magnetosphere of an orbiting hot Jupiter.
Secondary characteristics
Several planets or dwarf planets in the Solar System (such as Neptune and Pluto) have orbital periods that are in resonance with each other or with smaller bodies. This is common in satellite systems (e.g. the resonance between Io, Europa, and Ganymede around Jupiter, or between Enceladus and Dione around Saturn). All except Mercury and Venus have natural satellites, often called "moons". Earth has one, Mars has two, and the giant planets have numerous moons in complex planetary-type systems. Except for Ceres and Sedna, all the consensus dwarf planets are known to have at least one moon as well. Many moons of the giant planets have features similar to those on the terrestrial planets and dwarf planets, and some have been studied as possible abodes of life (especially Europa and Enceladus).
The four giant planets are orbited by planetary rings of varying size and complexity. The rings are composed primarily of dust or particulate matter, but can host tiny 'moonlets' whose gravity shapes and maintains their structure. Although the origins of planetary rings are not precisely known, they are believed to be the result of natural satellites that fell below their parent planets' Roche limits and were torn apart by tidal forces. The dwarf planets Haumea and Quaoar also have rings.
No secondary characteristics have been observed around exoplanets. The sub-brown dwarf Cha 110913−773444, which has been described as a rogue planet, is believed to be orbited by a tiny protoplanetary disc, and the sub-brown dwarf OTS 44 was shown to be surrounded by a substantial protoplanetary disk of at least 10 Earth masses.
History and etymology
The idea of planets has evolved over the history of astronomy, from the divine lights of antiquity to the earthly objects of the scientific age. The concept has expanded to include worlds not only in the Solar System, but in multitudes of other extrasolar systems. The consensus as to what counts as a planet, as opposed to other objects, has changed several times. It previously encompassed asteroids, moons, and dwarf planets like Pluto, and there continues to be some disagreement today.
Ancient civilizations and classical planets
The five classical planets of the Solar System, being visible to the naked eye, have been known since ancient times and have had a significant impact on mythology, religious cosmology, and ancient astronomy. In ancient times, astronomers noted how certain lights moved across the sky, as opposed to the "fixed stars", which maintained a constant relative position in the sky. Ancient Greeks called these lights πλάνητες ἀστέρες (planētes asteres, "wandering stars") or simply πλανῆται (planētai, "wanderers"), from which today's word "planet" was derived. In ancient Greece, China, Babylon, and indeed all pre-modern civilizations, it was almost universally believed that Earth was the center of the Universe and that all the "planets" circled Earth. The reasons for this perception were that stars and planets appeared to revolve around Earth each day and the apparently common-sense perceptions that Earth was solid and stable and that it was not moving but at rest.
Babylon
The first civilization known to have a functional theory of the planets was the Babylonians, who lived in Mesopotamia in the first and second millennia BC. The oldest surviving planetary astronomical text is the Babylonian Venus tablet of Ammisaduqa, a 7th-century BC copy of a list of observations of the motions of the planet Venus, which probably dates as early as the second millennium BC. The MUL.APIN is a pair of cuneiform tablets dating from the 7th century BC that lays out the motions of the Sun, Moon, and planets over the course of the year. Late Babylonian astronomy is the origin of Western astronomy and indeed all Western efforts in the exact sciences. The Enuma anu enlil, written during the Neo-Assyrian period in the 7th century BC, comprises a list of omens and their relationships with various celestial phenomena including the motions of the planets. The inferior planets Venus and Mercury and the superior planets Mars, Jupiter, and Saturn were all identified by Babylonian astronomers. These would remain the only known planets until the invention of the telescope in early modern times.
Greco-Roman astronomy
The ancient Greeks initially did not attach as much significance to the planets as the Babylonians. In the 6th and 5th centuries BC, the Pythagoreans appear to have developed their own independent planetary theory, which consisted of the Earth, Sun, Moon, and planets revolving around a "Central Fire" at the center of the Universe. Pythagoras or Parmenides is said to have been the first to identify the evening star (Hesperos) and morning star (Phosphoros) as one and the same (Aphrodite, Greek corresponding to Latin Venus), though this had long been known in Mesopotamia. In the 3rd century BC, Aristarchus of Samos proposed a heliocentric system, according to which Earth and the planets revolved around the Sun. The geocentric system remained dominant until the Scientific Revolution.
By the 1st century BC, during the Hellenistic period, the Greeks had begun to develop their own mathematical schemes for predicting the positions of the planets. These schemes, which were based on geometry rather than the arithmetic of the Babylonians, would eventually eclipse the Babylonians' theories in complexity and comprehensiveness and account for most of the astronomical movements observed from Earth with the naked eye. These theories would reach their fullest expression in the Almagest written by Ptolemy in the 2nd century CE. So complete was the domination of Ptolemy's model that it superseded all previous works on astronomy and remained the definitive astronomical text in the Western world for 13 centuries. To the Greeks and Romans, there were seven known planets, each presumed to be circling Earth according to the complex laws laid out by Ptolemy. They were, in increasing order from Earth (in Ptolemy's order and using modern names): the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn.
Medieval astronomy
After the fall of the Western Roman Empire, astronomy developed further in India and the medieval Islamic world. In 499 CE, the Indian astronomer Aryabhata propounded a planetary model that explicitly incorporated Earth's rotation about its axis, which he explained as the cause of the apparent westward motion of the stars. He also theorized that the orbits of planets were elliptical. Aryabhata's followers were particularly strong in South India, where his principles of the diurnal rotation of Earth, among others, were followed and a number of secondary works were based on them.
The astronomy of the Islamic Golden Age mostly took place in the Middle East, Central Asia, Al-Andalus, and North Africa, and later in the Far East and India. These astronomers, like the polymath Ibn al-Haytham, generally accepted geocentrism, although they did dispute Ptolemy's system of epicycles and sought alternatives. The 10th-century astronomer Abu Sa'id al-Sijzi accepted that the Earth rotates around its axis. In the 11th century, the transit of Venus was observed by Avicenna. His contemporary Al-Biruni devised a method of determining the Earth's radius using trigonometry that, unlike the older method of Eratosthenes, only required observations at a single mountain.
Scientific Revolution and discovery of outer planets
With the advent of the Scientific Revolution and the heliocentric model of Copernicus, Galileo, and Kepler, use of the term "planet" changed from something that moved around the sky relative to the fixed stars to a body that orbited the Sun, directly (a primary planet) or indirectly (a secondary or satellite planet). Thus the Earth was added to the roster of planets, and the Sun was removed. The Copernican count of primary planets stood until 1781, when William Herschel discovered Uranus.
When four satellites of Jupiter (the Galilean moons) and five of Saturn were discovered in the 17th century, they joined Earth's Moon in the category of "satellite planets" or "secondary planets" orbiting the primary planets, though in the following decades they would come to be called simply "satellites" for short. Scientists generally considered planetary satellites to also be planets until about the 1920s, although this usage was not common among non-scientists.
In the first decade of the 19th century, four new 'planets' were discovered: Ceres (in 1801), Pallas (in 1802), Juno (in 1804), and Vesta (in 1807). It soon became apparent that they were rather different from previously known planets: they shared the same general region of space, between Mars and Jupiter (the asteroid belt), with sometimes overlapping orbits. This was an area where only one planet had been expected, and they were much smaller than all other planets; indeed, it was suspected that they might be shards of a larger planet that had broken up. Herschel called them asteroids (from the Greek for "starlike") because even in the largest telescopes they resembled stars, without a resolvable disk.
The situation was stable for four decades, but in the 1840s several additional asteroids were discovered (Astraea in 1845; Hebe, Iris, and Flora in 1847; Metis in 1848; and Hygiea in 1849). New "planets" were discovered every year; as a result, astronomers began tabulating the asteroids (minor planets) separately from the major planets and assigning them numbers instead of abstract planetary symbols, although they continued to be considered as small planets.
Neptune was discovered in 1846, its position having been predicted thanks to its gravitational influence upon Uranus. Because the orbit of Mercury appeared to be affected in a similar way, it was believed in the late 19th century that there might be another planet even closer to the Sun. However, the discrepancy between Mercury's orbit and the predictions of Newtonian gravity was instead explained by an improved theory of gravity, Einstein's general relativity.
Pluto was discovered in 1930. After initial observations led to the belief that it was larger than Earth, the object was immediately accepted as the ninth major planet. Further monitoring found the body was actually much smaller: in 1936, Ray Lyttleton suggested that Pluto may be an escaped satellite of Neptune, and Fred Whipple suggested in 1964 that Pluto may be a comet. The discovery of its large moon Charon in 1978 showed that Pluto was only 0.2% the mass of Earth. As this was still substantially more massive than any known asteroid, and because no other trans-Neptunian objects had been discovered at that time, Pluto kept its planetary status, only officially losing it in 2006.
In the 1950s, Gerard Kuiper published papers on the origin of the asteroids. He recognized that asteroids were typically not spherical, as had previously been thought, and that the asteroid families were remnants of collisions. Thus he differentiated between the largest asteroids as "true planets" versus the smaller ones as collisional fragments. From the 1960s onwards, the term "minor planet" was mostly displaced by the term "asteroid", and references to the asteroids as planets in the literature became scarce, except for the geologically evolved largest three: Ceres, and less often Pallas and Vesta.
The beginning of Solar System exploration by space probes in the 1960s spurred a renewed interest in planetary science. A split in definitions regarding satellites occurred around then: planetary scientists began to reconsider the large moons as also being planets, but astronomers who were not planetary scientists generally did not. (This is not exactly the same as the definition used in the previous century, which classed all satellites as secondary planets, even non-round ones like Saturn's Hyperion or Mars's Phobos and Deimos.) All the eight major planets and their planetary-mass moons have since been explored by spacecraft, as have many asteroids and the dwarf planets Ceres and Pluto; however, so far the only planetary-mass body beyond Earth that has been explored by humans is the Moon.
Defining the term planet
A growing number of astronomers argued for Pluto to be declassified as a planet, because many similar objects approaching its size had been found in the same region of the Solar System (the Kuiper belt) during the 1990s and early 2000s. Pluto was found to be just one "small" body in a population of thousands. They often referred to the demotion of the asteroids as a precedent, although that had been done based on their geophysical differences from planets rather than their being in a belt. Some of the larger trans-Neptunian objects, such as Quaoar, Sedna, Eris, and Haumea, were heralded in the popular press as the tenth planet.
The announcement of Eris in 2005, an object 27% more massive than Pluto, created the impetus for an official definition of a planet, as considering Pluto a planet would logically have demanded that Eris be considered a planet as well. Since different procedures were in place for naming planets versus non-planets, this created an urgent situation because under the rules Eris could not be named without defining what a planet was. At the time, it was also thought that the size required for a trans-Neptunian object to become round was about the same as that required for the moons of the giant planets (about 400 km diameter), a figure that would have suggested about 200 round objects in the Kuiper belt and thousands more beyond. Many astronomers argued that the public would not accept a definition creating a large number of planets.
To address the problem, the International Astronomical Union (IAU) set about creating the definition of planet and produced one in August 2006. Under this definition, a planet is a body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to pull it into a nearly round (hydrostatic-equilibrium) shape, and (c) has cleared the neighbourhood around its orbit. The Solar System is thereby considered to have eight planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune). Bodies that fulfill the first two conditions but not the third are classified as dwarf planets, provided they are not natural satellites of other planets. Originally an IAU committee had proposed a definition that would have included a larger number of planets as it did not include (c) as a criterion. After much discussion, it was decided via a vote that those bodies should instead be classified as dwarf planets.
Criticisms and alternatives to IAU definition
The IAU definition has not been universally used or accepted. In planetary geology, celestial objects are defined as planets by geophysical characteristics. A celestial body may acquire a dynamic (planetary) geology at approximately the mass required for its mantle to become plastic under its own weight. This leads to a state of hydrostatic equilibrium where the body acquires a stable, round shape, which is adopted as the hallmark of planethood by geophysical definitions.
In the Solar System, this mass is generally less than the mass required for a body to clear its orbit; thus, some objects that are considered "planets" under geophysical definitions are not considered as such under the IAU definition, such as Ceres and Pluto. (In practice, the requirement for hydrostatic equilibrium is universally relaxed to a requirement for rounding and compaction under self-gravity; Mercury is not actually in hydrostatic equilibrium, but is universally included as a planet regardless.) Proponents of such definitions often argue that location should not matter and that planethood should be defined by the intrinsic properties of an object. Dwarf planets had been proposed as a category of small planet (as opposed to planetoids as sub-planetary objects) and planetary geologists continue to treat them as planets despite the IAU definition.
The number of dwarf planets even among known objects is not certain. In 2019, Grundy et al. argued based on the low densities of some mid-sized trans-Neptunian objects that the limiting size required for a trans-Neptunian object to reach equilibrium was in fact much larger than it is for the icy moons of the giant planets, being about 900–1000 km diameter. There is general consensus on Ceres in the asteroid belt and on the eight trans-Neptunians that probably cross this threshold: Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Sedna, and Orcus.
Planetary geologists may include the nineteen known planetary-mass moons as "satellite planets", including Earth's Moon and Pluto's Charon, like the early modern astronomers. Some go even further and include as planets relatively large, geologically evolved bodies that are nonetheless not very round today, such as Pallas and Vesta; rounded bodies that were completely disrupted by impacts and re-accreted like Hygiea; or even everything at least the diameter of Saturn's moon Mimas, the smallest planetary-mass moon. (This may even include objects that are not round but happen to be larger than Mimas, like Neptune's moon Proteus.)
Astronomer Jean-Luc Margot proposed a mathematical criterion that determines whether an object can clear its orbit during the lifetime of its host star, based on the mass of the planet, its semimajor axis, and the mass of its host star. The formula produces a value called Π (pi) that is greater than 1 for planets. The eight known planets and all known exoplanets have Π values above 100, while Ceres, Pluto, and Eris have Π values of 0.1 or less. Objects with Π values of 1 or more are expected to be approximately spherical, so that objects that fulfill the orbital-zone clearance requirement around Sun-like stars will also fulfill the roundness requirement – though this may not be the case around very low-mass stars. In 2024, Margot and collaborators proposed a revised version of the criterion with a uniform clearing timescale of 10 billion years (the approximate main-sequence lifetime of the Sun) or 13.8 billion years (the age of the Universe) to accommodate planets orbiting brown dwarfs.
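A rough numerical sketch of this discriminant follows. The functional form Π = k·m·M^(−5/2)·a^(−9/8) (m in Earth masses, M in solar masses, a in AU) and the constant k ≈ 807 are taken from published summaries of the 2015 criterion and should be treated as assumptions here; the input masses and distances are rounded illustrative values.

```python
# Sketch of Margot's planet discriminant for Sun-like hosts (assumed form and constant):
#   Pi = k * m * M**(-5/2) * a**(-9/8), with m in Earth masses, M in solar masses,
#   a in AU, and k ~ 807; Pi > 1 indicates the body can clear its orbital zone.

def margot_pi(planet_mass_earths: float, star_mass_suns: float,
              semi_major_axis_au: float, k: float = 807.0) -> float:
    return k * planet_mass_earths * star_mass_suns**-2.5 * semi_major_axis_au**-1.125

# Rounded masses (Earth masses) and semi-major axes (AU), assumed for illustration
for name, m, a in [("Earth", 1.0, 1.0), ("Ceres", 0.00016, 2.77), ("Pluto", 0.0022, 39.5)]:
    print(f"{name:6s} Pi = {margot_pi(m, 1.0, a):.2g}")
# Earth comes out in the hundreds, while Ceres and Pluto fall well below 1.
```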
Exoplanets
Even before the discovery of exoplanets, there were particular disagreements over whether an object should be considered a planet if it was part of a distinct population such as a belt, or if it was large enough to generate energy by the thermonuclear fusion of deuterium. Complicating the matter even further, bodies too small to generate energy by fusing deuterium can form by gas-cloud collapse just like stars and brown dwarfs, even down to the mass of Jupiter: there was thus disagreement about whether the way a body formed should be taken into account.
In 1992, astronomers Aleksander Wolszczan and Dale Frail announced the discovery of planets around a pulsar, PSR B1257+12. This discovery is generally considered to be the first definitive detection of a planetary system around another star. Then, on 6 October 1995, Michel Mayor and Didier Queloz of the Geneva Observatory announced the first definitive detection of an exoplanet orbiting an ordinary main-sequence star (51 Pegasi).
The discovery of exoplanets led to another ambiguity in defining a planet: the point at which a planet becomes a star. Many known exoplanets are many times the mass of Jupiter, approaching that of stellar objects known as brown dwarfs. Brown dwarfs are generally considered stars due to their theoretical ability to fuse deuterium, a heavier isotope of hydrogen. Although objects more massive than 75 times that of Jupiter fuse simple hydrogen, objects of 13 Jupiter masses can fuse deuterium. Deuterium is quite rare, constituting less than 0.0026% of the hydrogen in the galaxy, and most brown dwarfs would have ceased fusing deuterium long before their discovery, making them effectively indistinguishable from supermassive planets.
IAU working definition of exoplanets
The 2006 IAU definition presents some challenges for exoplanets because the language is specific to the Solar System and the criteria of roundness and orbital zone clearance are not presently observable for exoplanets. In 2018, this definition was reassessed and updated as knowledge of exoplanets increased. The current official working definition of an exoplanet is, in outline, as follows: objects that orbit stars, brown dwarfs, or stellar remnants and that are below the limiting mass for thermonuclear fusion of deuterium (about 13 Jupiter masses for objects of solar metallicity) are planets; substellar objects above that limiting mass are brown dwarfs, regardless of how they formed or where they are located; and free-floating objects in young star clusters with masses below that limiting mass are considered not planets but sub-brown dwarfs.
The IAU noted that this definition could be expected to evolve as knowledge improves. A 2022 review article discussing the history and rationale of this definition suggested that the words "in young star clusters" should be deleted in clause 3, as such objects have now been found elsewhere, and that the term "sub-brown dwarfs" should be replaced by the more current "free-floating planetary mass objects". The term "planetary mass object" has also been used to refer to ambiguous situations concerning exoplanets, such as objects with mass typical for a planet that are free-floating or orbit a brown dwarf instead of a star. Free-floating objects of planetary mass have sometimes been called planets anyway, specifically rogue planets.
The limit of 13 Jupiter masses is not universally accepted. Objects below this mass limit can sometimes burn deuterium, and the amount of deuterium that is burned depends on an object's composition. Furthermore, deuterium is quite scarce, so the stage of deuterium burning does not actually last very long; unlike hydrogen burning in a star, deuterium burning does not significantly affect the future evolution of an object. The relationship between mass and radius (or density) shows no special feature at this limit, suggesting that brown dwarfs have the same physics and internal structure as lighter Jovian planets and would more naturally be considered planets.
Thus, many catalogues of exoplanets include objects heavier than 13 Jupiter masses, sometimes going up to 60 Jupiter masses. (The limit for hydrogen burning and becoming a red dwarf star is about 80 Jupiter masses.) The situation of main-sequence stars has been used to argue for such an inclusive definition of "planet" as well, as they also differ greatly along the two orders of magnitude that they cover, in their structure, atmospheres, temperature, spectral features, and probably formation mechanisms; yet they are all considered as one class, being all hydrostatic-equilibrium objects undergoing nuclear burning.
Mythology and naming
The naming of planets differs between planets of the Solar System and exoplanets (planets of other planetary systems). Exoplanets are commonly named after their parent star and their order of discovery within its planetary system, such as Proxima Centauri b. (The lettering starts at b, with a considered to represent the parent star.)
The names for the planets of the Solar System (other than Earth) in the English language are derived from naming practices developed consecutively by the Babylonians, Greeks, and Romans of antiquity. The practice of grafting the names of gods onto the planets was almost certainly borrowed from the Babylonians by the ancient Greeks, and thereafter from the Greeks by the Romans. The Babylonians named Venus after the Sumerian goddess of love with the Akkadian name Ishtar; Mars after their god of war, Nergal; Mercury after their god of wisdom Nabu; and Jupiter after their chief god, Marduk. There are too many concordances between Greek and Babylonian naming conventions for them to have arisen separately. Given the differences in mythology, the correspondence was not perfect. For instance, the Babylonian Nergal was a god of war, and thus the Greeks identified him with Ares. Unlike Ares, Nergal was also a god of pestilence and ruler of the underworld.
In ancient Greece, the two great luminaries, the Sun and the Moon, were called Helios and Selene, two ancient Titanic deities; the slowest planet, Saturn, was called Phainon, the "shiner"; it was followed by Phaethon, Jupiter, the "bright" one; the red planet, Mars, was known as Pyroeis, the "fiery"; the brightest, Venus, was known as Phosphoros, the light bringer; and the fleeting final planet, Mercury, was called Stilbon, the gleamer. The Greeks assigned each planet to one among their pantheon of gods, the Olympians and the earlier Titans:
Helios and Selene were the names of both planets and gods, both of them Titans (later supplanted by Olympians Apollo and Artemis);
Phainon was sacred to Cronus, the Titan who fathered the Olympians;
Phaethon was sacred to Zeus, Cronus's son who deposed him as king;
Pyroeis was given to Ares, son of Zeus and god of war;
Phosphoros was ruled by Aphrodite, the goddess of love; and
Stilbon, with its speedy motion, was ruled over by Hermes, messenger of the gods and god of learning and wit.
Although modern Greeks still use their ancient names for the planets, other European languages, because of the influence of the Roman Empire and, later, the Catholic Church, use the Roman (Latin) names rather than the Greek ones. The Romans inherited Proto-Indo-European mythology as the Greeks did and shared with them a common pantheon under different names, but the Romans lacked the rich narrative traditions that Greek poetic culture had given their gods. During the later period of the Roman Republic, Roman writers borrowed much of the Greek narratives and applied them to their own pantheon, to the point where they became virtually indistinguishable. When the Romans studied Greek astronomy, they gave the planets their own gods' names: Mercurius (for Hermes), Venus (Aphrodite), Mars (Ares), Iuppiter (Zeus), and Saturnus (Cronus). However, there was not much agreement on which god a particular planet was associated with; according to Pliny the Elder, while Phainon and Phaethon's associations with Saturn and Jupiter respectively were widely agreed upon, Pyroeis was also associated with the demi-god Hercules, Stilbon was also associated with Apollo, god of music, healing, and prophecy; Phosphoros was also associated with prominent goddesses Juno and Isis. Some Romans, following a belief possibly originating in Mesopotamia but developed in Hellenistic Egypt, believed that the seven gods after whom the planets were named took hourly shifts in looking after affairs on Earth. The order of shifts went Saturn, Jupiter, Mars, Sun, Venus, Mercury, Moon (from the farthest to the closest planet). Therefore, the first day was started by Saturn (1st hour), second day by Sun (25th hour), followed by Moon (49th hour), Mars, Mercury, Jupiter, and Venus. Because each day was named by the god that started it, this became the order of the days of the week in the Roman calendar. In English, Saturday, Sunday, and Monday are straightforward translations of these Roman names. The other days were renamed after Tīw (Tuesday), Wōden (Wednesday), Þunor (Thursday), and Frīġ (Friday), the Anglo-Saxon gods considered similar or equivalent to Mars, Mercury, Jupiter, and Venus, respectively.
Earth's name in English is not derived from Greco-Roman mythology. Because it was only generally accepted as a planet in the 17th century, there is no tradition of naming it after a god. (The same is true, in English at least, of the Sun and the Moon, though they are no longer generally considered planets.) The name originates from the Old English word eorþe, which was the word for "ground" and "dirt" as well as the world itself. As with its equivalents in the other Germanic languages, it derives ultimately from the Proto-Germanic word erþō, as can be seen in the English earth, the German Erde, the Dutch aarde, and the Scandinavian jord. Many of the Romance languages retain the old Roman word terra (or some variation of it) that was used with the meaning of "dry land" as opposed to "sea". The non-Romance languages use their own native words. The Greeks retain their original name, Γή (Ge).
Non-European cultures use other planetary-naming systems. India uses a system based on the Navagraha, which incorporates the seven traditional planets and the ascending and descending lunar nodes Rahu and Ketu. The planets are Surya 'Sun', Chandra 'Moon', Budha for Mercury, Shukra ('bright') for Venus, Mangala (the god of war) for Mars, Brihaspati (councilor of the gods) for Jupiter, and Shani (symbolic of time) for Saturn.
The native Persian names of most of the planets are based on identifications of the Mesopotamian gods with Iranian gods, analogous to the Greek and Latin names. Mercury is Tir (Persian: تیر) for the western Iranian god Tīriya (patron of scribes), analogous to Nabu; Venus is Nāhid (ناهید) for Anahita; Mars is Bahrām (بهرام) for Verethragna; and Jupiter is Hormoz (هرمز) for Ahura Mazda. The Persian name for Saturn, Keyvān (کیوان), is a borrowing from Akkadian kajamānu, meaning "the permanent, steady".
China and the countries of eastern Asia historically subject to Chinese cultural influence (such as Japan, Korea, and Vietnam) use a naming system based on the five Chinese elements: water (Mercury 水星 "water star"), metal (Venus 金星 "metal star"), fire (Mars 火星 "fire star"), wood (Jupiter 木星 "wood star"), and earth (Saturn 土星 "earth star"). The names of Uranus (天王星 "sky king star"), Neptune (海王星 "sea king star"), and Pluto (冥王星 "underworld king star") in Chinese, Korean, and Japanese are calques based on the roles of those gods in Roman and Greek mythology. In the 19th century, Alexander Wylie and Li Shanlan calqued the names of the first 117 asteroids into Chinese, and many of their names are still used today, e.g. Ceres (穀神星 "grain goddess star"), Pallas (智神星 "wisdom goddess star"), Juno (婚神星 "marriage goddess star"), Vesta (灶神星 "hearth goddess star"), and Hygiea (健神星 "health goddess star"). Such translations were extended to some later minor planets, including some of the dwarf planets discovered in the 21st century, e.g. Haumea (妊神星 "pregnancy goddess star"), Makemake (鳥神星 "bird goddess star"), and Eris (鬩神星 "quarrel goddess star"). However, except for the better-known asteroids and dwarf planets, many of them are rare outside Chinese astronomical dictionaries.
In traditional Hebrew astronomy, the seven traditional planets have (for the most part) descriptive names—the Sun is חמה Ḥammah or "the hot one", the Moon is לבנה Levanah or "the white one", Venus is כוכב נוגה Kokhav Nogah or "the bright planet", Mercury is כוכב Kokhav or "the planet" (given its lack of distinguishing features), Mars is מאדים Ma'adim or "the red one", and Saturn is שבתאי Shabbatai or "the resting one" (in reference to its slow movement compared to the other visible planets). The odd one out is Jupiter, called צדק Tzedeq or "justice". These names, first attested in the Babylonian Talmud, are not the original Hebrew names of the planets. In 377 Epiphanius of Salamis recorded another set of names that seem to have pagan or Canaanite associations: those names, since replaced for religious reasons, were probably the historical Semitic names, and may have much earlier roots going back to Babylonian astronomy. Hebrew names were chosen for Uranus (אורון Oron, "small light") and Neptune (רהב Rahab, a Biblical sea monster) in 2009; prior to that the names "Uranus" and "Neptune" had simply been borrowed. The etymologies for the Arabic names of the planets are less well understood. Mostly agreed among scholars are Venus (Arabic: الزهرة, az-Zuhara, "the bright one"), Earth (الأرض, al-ʾArḍ, from the same root as eretz), and Saturn (زحل, Zuḥal, "withdrawer"). Multiple suggested etymologies exist for Mercury (عطارد, ʿUṭārid), Mars (المريخ, al-Mirrīkh), and Jupiter (المشتري, al-Muštarī), but there is no agreement among scholars.
When subsequent planets were discovered in the 18th and 19th centuries, Uranus was named for a Greek deity and Neptune for a Roman one (the counterpart of Poseidon). The asteroids were initially named from mythology as well—Ceres, Juno, and Vesta are major Roman goddesses, and Pallas is an epithet of the major Greek goddess Athena—but as more and more were discovered, they first started being named after more minor goddesses, and the mythological restriction was dropped starting from the twentieth asteroid Massalia in 1852. Pluto (named after the Greek god of the underworld) was given a classical name, as it was considered a major planet when it was discovered. After more objects were discovered beyond Neptune, naming conventions depending on their orbits were put in place: those in the 2:3 resonance with Neptune (the plutinos) are given names from underworld myths, while others are given names from creation myths. Most of the trans-Neptunian planetoids are named after gods and goddesses from other cultures (e.g. Quaoar is named after a Tongva god). There are a few exceptions which continue the Roman and Greek scheme, notably including Eris as it had initially been considered a tenth planet.
The moons (including the planetary-mass ones) are generally given names with some association with their parent planet. The planetary-mass moons of Jupiter are named after four of Zeus' lovers (or other sexual partners); those of Saturn are named after Cronus' brothers and sisters, the Titans; those of Uranus are named after characters from Shakespeare and Pope (originally specifically from fairy mythology, but that ended with the naming of Miranda). Neptune's planetary-mass moon Triton is named after the god's son; Pluto's planetary-mass moon Charon is named after the ferryman of the dead, who carries the souls of the newly deceased to the underworld (Pluto's domain).
Symbols
The written symbols for Mercury, Venus, Jupiter, Saturn, and possibly Mars have been traced to forms found in late Greek papyrus texts. The symbols for Jupiter and Saturn are identified as monograms of the corresponding Greek names, and the symbol for Mercury is a stylized caduceus.
According to Annie Scott Dill Maunder, antecedents of the planetary symbols were used in art to represent the gods associated with the classical planets. Bianchini's planisphere, discovered by Francesco Bianchini in the 18th century but produced in the 2nd century, shows Greek personifications of planetary gods charged with early versions of the planetary symbols. Mercury has a caduceus; Venus has, attached to her necklace, a cord connected to another necklace; Mars, a spear; Jupiter, a staff; Saturn, a scythe; the Sun, a circlet with rays radiating from it; and the Moon, a headdress with a crescent attached. The modern shapes with the cross-marks first appeared around the 16th century. According to Maunder, the addition of crosses appears to be "an attempt to give a savour of Christianity to the symbols of the old pagan gods." Earth itself was not considered a classical planet; its symbol descends from a pre-heliocentric symbol for the four corners of the world.
When further planets were discovered orbiting the Sun, symbols were invented for them. The most common astronomical symbol for Uranus, ⛢, was invented by Johann Gottfried Köhler, and was intended to represent the newly discovered metal platinum. An alternative symbol, ♅, was invented by Jérôme Lalande, and represents a globe with an H on top, for Uranus's discoverer Herschel. Today, ⛢ is mostly used by astronomers and ♅ by astrologers, though it is possible to find each symbol in the other context. The first few asteroids were considered to be planets when they were discovered, and were likewise given abstract symbols, e.g. Ceres' sickle (⚳), Pallas' spear (⚴), Juno's sceptre (⚵), and Vesta's hearth (⚶). However, as their number rose further and further, this practice stopped in favour of numbering them instead. (Massalia, the first asteroid not named from mythology, is also the first asteroid that was not assigned a symbol by its discoverer.) The symbols for the first four asteroids, Ceres through Vesta, remained in use for longer than the others, and even in the modern day NASA has used the Ceres symbol—Ceres being the only asteroid that is also a dwarf planet. Neptune's symbol (♆) represents the god's trident. The astronomical symbol for Pluto is a P-L monogram (♇), though it has become less common since the IAU definition reclassified Pluto. Since Pluto's reclassification, NASA has used the traditional astrological symbol of Pluto (⯓), a planetary orb over Pluto's bident.
The IAU discourages the use of planetary symbols in modern journal articles in favour of one-letter or (to disambiguate Mercury and Mars) two-letter abbreviations for the major planets. The symbols for the Sun and Earth are nonetheless common, as solar mass, Earth mass, and similar units are common in astronomy. Other planetary symbols today are mostly encountered in astrology. Astrologers have resurrected the old astronomical symbols for the first few asteroids and continue to invent symbols for other objects. This includes relatively standard astrological symbols for the dwarf planets discovered in the 21st century, which were not given symbols by astronomers because planetary symbols had mostly fallen out of use in astronomy by the time they were discovered. Many astrological symbols are included in Unicode, and a few of these new inventions (the symbols of Haumea, Makemake, and Eris) have since been used by NASA in astronomy. The Eris symbol is a traditional one from Discordianism, a religion worshipping the goddess Eris. The other dwarf-planet symbols are mostly initialisms (except Haumea) in the native scripts of the cultures they come from; they also represent something associated with the corresponding deity or culture, e.g. Makemake's face or Gonggong's snake-tail.
See also
List of landings on extraterrestrial bodies
Lists of planets – A list of lists of planets sorted by diverse attributes
Notes
References
External links
Photojournal NASA
Planetary Science Research Discoveries (educational site with illustrated articles)
Planetary science
Observational astronomy
Concepts in astronomy
Solar System | Planet | ["Physics", "Astronomy"] | 14,428 | ["Astronomical sub-disciplines", "Outer space", "Concepts in astronomy", "Observational astronomy", "Planetary science", "Astronomical objects", "Solar System", "Planets"]
22,934 | https://en.wikipedia.org/wiki/Probability | Probability is the branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur. This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.
Etymology
The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.
Interpretations
When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, dealing with probabilities in the context of real experiments). For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability:
Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
Subjectivists assign numbers per subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E", although that interpretation is not universally agreed upon. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, when normalized, results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share.
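A minimal numerical sketch of the prior-times-likelihood update described above, using a coin-bias estimation problem on a discrete grid; the uniform prior, the grid, and the data (7 heads in 10 tosses) are illustrative assumptions, not part of the article.

```python
import numpy as np

# Sketch: Bayesian updating over a grid of hypotheses about a coin's bias.
# posterior ∝ prior × likelihood, then normalised so the probabilities sum to 1.
theta = np.linspace(0.01, 0.99, 99)        # candidate values for P(heads)
prior = np.ones_like(theta) / len(theta)   # uniform (subjective) prior

heads, tails = 7, 3                        # illustrative data: 7 heads in 10 tosses
likelihood = theta**heads * (1 - theta)**tails

posterior = prior * likelihood
posterior /= posterior.sum()               # normalise

print(f"posterior mean of P(heads): {np.sum(theta * posterior):.3f}")  # ~0.67
```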
History
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by superstitions.
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.
The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).
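In modern notation, Cardano's relation between odds and probability can be written as below; the die example is added purely as an illustration.

```latex
% Odds of f favourable to u unfavourable outcomes correspond to the probability
\text{odds} = f : u
\quad\Longleftrightarrow\quad
p = \frac{f}{f + u}

% Illustration: a fair die shows a six with odds 1:5, i.e. p = \frac{1}{1 + 5} = \frac{1}{6}.
```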
Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.
The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."
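In modern notation, the two laws of error take roughly the following forms; the scale parameters m and h are illustrative symbols, not taken from the original text.

```latex
% Laplace's first law of error (1774): exponential in the magnitude of the error x
\phi(x) = \frac{m}{2}\, e^{-m\lvert x\rvert}

% Laplace's second law of error (1778): exponential in the square of the error
% (the normal, or Gaussian, law)
\phi(x) = \frac{h}{\sqrt{\pi}}\, e^{-h^{2} x^{2}}
```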
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error, phi(x) = c e^(-h^2 x^2),
where h is a constant depending on precision of observation, and c is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.
In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.
In 1906, Andrey Markov introduced the notion of Markov chains, which played an important role in stochastic processes theory and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1933.
On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin. See integral geometry for more information.
Theory
Like other theories, the theory of probability is a representation of its concepts in formal terms, that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.
Applications
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.
An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.
In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty.
The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
Mathematical treatment
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as Ω. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.
A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.
The probability of an event A is written as P(A), p(A), or Pr(A). This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.
The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as A', A^c, or ¬A; its probability is given by P(not A) = 1 − P(A). As an example, the chance of not rolling a six on a six-sided die is 1 − 1/6 = 5/6. For a more comprehensive treatment, see Complementary event.
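The die example can be checked by brute-force enumeration. Below is a minimal Python sketch (the event definitions and names are illustrative, not from the article) that assigns each outcome of a fair die equal weight and verifies the complement rule.

from fractions import Fraction

# Sample space of a fair six-sided die; each outcome is equally likely.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    # Probability of an event under equally likely outcomes.
    return Fraction(len(event & sample_space), len(sample_space))

odd = {1, 3, 5}                  # the event "the die falls on an odd number"
not_six = sample_space - {6}     # complement of the event "rolling a six"

print(prob(odd))      # 1/2
print(prob(not_six))  # 5/6, i.e. 1 - 1/6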
If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A and B) or P(A ∩ B).
Independent events
If two events, A and B, are independent then the joint probability is P(A and B) = P(A ∩ B) = P(A) P(B).
For example, if two coins are flipped, then the chance of both being heads is 1/2 × 1/2 = 1/4.
Mutually exclusive events
If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.
If two events are mutually exclusive, then the probability of both occurring is denoted as P(A ∩ B), and P(A ∩ B) = 0. If two events are mutually exclusive, then the probability of either occurring is denoted as P(A ∪ B), and P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P(A) + P(B).
For example, the chance of rolling a 1 or 2 on a six-sided die is P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.
Not (necessarily) mutually exclusive events
If the events are not (necessarily) mutually exclusive then P(A or B) = P(A) + P(B) − P(A and B). Rewritten, P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is 13/52 + 12/52 − 3/52 = 22/52 ≈ 0.423, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
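As a sanity check on the inclusion-exclusion count above, the following Python sketch (the card representation is illustrative) enumerates a 52-card deck and compares the direct count with P(hearts) + P(face card) − P(both).

from fractions import Fraction
from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = set(product(ranks, suits))                      # 52 cards

hearts = {c for c in deck if c[1] == 'hearts'}         # 13 cards
faces = {c for c in deck if c[0] in ('J', 'Q', 'K')}   # 12 cards

direct = Fraction(len(hearts | faces), len(deck))
by_formula = (Fraction(len(hearts), 52) + Fraction(len(faces), 52)
              - Fraction(len(hearts & faces), 52))
print(direct, by_formula)   # both are 11/26, i.e. 22/52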
This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows: P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C). It can be seen, then, that this pattern can be repeated for any number of events.
Conditional probability
Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A | B), and is read "the probability of A, given B". It is defined by P(A | B) = P(A ∩ B) / P(B).
If P(B) = 0 then P(A | B) is formally undefined by this expression. In this case A and B are independent, since P(A ∩ B) = P(A) P(B) = 0. However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable).
For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be 2/3.
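The two-ball example can be reproduced by enumerating every ordered pair of draws without replacement; the sketch below (ball labels and helper names are illustrative) simply counts outcomes to recover the stated conditional probabilities.

from fractions import Fraction
from itertools import permutations

balls = ['R1', 'R2', 'B1', 'B2']           # 2 red and 2 blue balls
draws = list(permutations(balls, 2))       # all ordered pairs of draws

def p(event, given=lambda d: True):
    # Conditional probability of `event` among the draws satisfying `given`.
    conditioned = [d for d in draws if given(d)]
    return Fraction(sum(1 for d in conditioned if event(d)), len(conditioned))

first_red = lambda d: d[0].startswith('R')
first_blue = lambda d: d[0].startswith('B')
second_red = lambda d: d[1].startswith('R')

print(p(first_red))               # 1/2
print(p(second_red, first_red))   # 1/3
print(p(second_red, first_blue))  # 2/3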
Inverse probability
In probability theory and applications, Bayes' rule relates the odds of event A1 to event A2, before (prior to) and after (posterior to) conditioning on another event B. The odds on A1 to event A2 is simply the ratio of the probabilities of the two events. When arbitrarily many events are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as the event varies, for fixed or given B (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).
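The "posterior is proportional to prior times likelihood" form can be made concrete with a small discrete update; the prior and likelihood numbers below are made up purely for illustration.

# Hypothetical discrete Bayes update: which of two coins (fair or biased)
# produced an observed head? All numbers are illustrative only.
priors = {'fair': 0.5, 'biased': 0.5}
likelihood_heads = {'fair': 0.5, 'biased': 0.8}   # P(heads | coin)

unnormalized = {c: priors[c] * likelihood_heads[c] for c in priors}
normalizer = sum(unnormalized.values())
posterior = {c: v / normalizer for c, v in unnormalized.items()}
print(posterior)   # {'fair': 0.3846..., 'biased': 0.6153...}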
Summary of probabilities
Relation to randomness and probability in quantum mechanics
In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon) (but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. know them). In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically the order of magnitude of the Avogadro constant, 6.02 × 10^23) that only a statistical description of its properties is feasible.
Probability theory is required to describe quantum phenomena. A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
See also
Contingency
Equiprobability
Fuzzy logic
Heuristic (psychology)
Notes
References
Bibliography
Kallenberg, O. (2005) Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York. 510 pp.
Kallenberg, O. (2002) Foundations of Modern Probability, 2nd ed. Springer Series in Statistics. 650 pp.
Olofsson, Peter (2005) Probability, Statistics, and Stochastic Processes, Wiley-Interscience. 504 pp.
External links
Virtual Laboratories in Probability and Statistics (Univ. of Ala.-Huntsville)
Probability and Statistics EBook
Edwin Thompson Jaynes. Probability Theory: The Logic of Science. Preprint: Washington University, (1996). – HTML index with links to PostScript files and PDF (first three chapters)
People from the History of Probability and Statistics (Univ. of Southampton)
Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton)
Earliest Uses of Symbols in Probability and Statistics on Earliest Uses of Various Mathematical Symbols
A tutorial on probability and Bayes' theorem devised for first-year Oxford University students
La Monte Young, An Anthology of Chance Operations (1963), PDF file at UbuWeb
Introduction to Probability – eBook, by Charles Grinstead and Laurie Snell; source available under the GNU Free Documentation License
Bruno de Finetti, Probabilità e induzione, Bologna, CLUEB, 1993. (digital version)
Richard Feynman's Lecture on probability. | Probability | [
"Physics",
"Mathematics"
] | 4,263 | [
"Wikipedia categories named after physical quantities",
"Probability",
"Probability and statistics",
"Physical quantities"
] |
23,000 | https://en.wikipedia.org/wiki/Polynomial | In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate x is x^2 − 4x + 7. An example with three indeterminates is x^3 + 2xyz^2 − yz + 1.
Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry.
Etymology
The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. That is, it means a sum of many terms (many monomials). The word polynomial was first used in the 17th century.
Notation and terminology
The x occurring in a polynomial is commonly called a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the function defined by the polynomial, then x represents the argument of the function, and is therefore called a "variable". Many authors use these two words interchangeably.
A polynomial P in the indeterminate x is commonly denoted either as P or as P(x). Formally, the name of the polynomial is P, not P(x), but the use of the functional notation P(x) dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "let P(x) be a polynomial" is a shorthand for "let P be a polynomial in the indeterminate x". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial.
The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials.
If a denotes a number, a variable, another polynomial, or, more generally, any expression, then P(a) denotes, by convention, the result of substituting a for x in P. Thus, the polynomial P defines the function
which is the polynomial function associated to P.
Frequently, when using this notation, one supposes that a is a number. However, one may use it over any domain where addition and multiplication are defined (that is, any ring). In particular, if a is a polynomial then P(a) is also a polynomial.
More specifically, when a is the indeterminate x, then the image of x by this function is the polynomial P itself (substituting x for x does not change anything). In other words,
which justifies formally the existence of two notations for the same polynomial.
Definition
A polynomial expression is an expression that can be built from constants and symbols called variables or indeterminates by means of addition, multiplication and exponentiation to a non-negative integer power. The constants are generally numbers, but may be any expression that does not involve the indeterminates, and represent mathematical objects that can be added and multiplied. Two polynomial expressions are considered as defining the same polynomial if they may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication. For example (x − 1)(x − 2) and x^2 − 3x + 2 are two polynomial expressions that represent the same polynomial; so, one has the equality (x − 1)(x − 2) = x^2 − 3x + 2.
A polynomial in a single indeterminate x can always be written (or rewritten) in the form
a_n x^n + a_(n−1) x^(n−1) + ... + a_2 x^2 + a_1 x + a_0,
where a_0, ..., a_n are constants that are called the coefficients of the polynomial, and x is the indeterminate. The word "indeterminate" means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function, called a polynomial function.
This can be expressed more concisely by using summation notation: the sum of a_k x^k for k = 0, 1, ..., n.
That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. Each term consists of the product of a number called the coefficient of the term and a finite number of indeterminates, raised to non-negative integer powers.
Classification
The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient. Because x = x^1, the degree of an indeterminate without a written exponent is one.
A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below).
For example:
−5x^2y is a term. The coefficient is −5, the indeterminates are x and y, the degree of x is two, while the degree of y is one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is 2 + 1 = 3.
Forming a sum of several terms produces a polynomial. For example, the following is a polynomial: 3x^2 − 5x + 4.
It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero.
Polynomials of small degree have been given specific names. A polynomial of degree zero is a constant polynomial, or simply a constant. Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials. For higher degrees, the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, the term 2x in x^2 + 2x + 1 is a linear term in a quadratic polynomial.
The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞). The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of roots. The graph of the zero polynomial, f(x) = 0, is the x-axis.
In the case of polynomials in more than one indeterminate, a polynomial is called homogeneous of degree n if all of its non-zero terms have degree n. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined. For example, x^3y^2 + 7x^2y^3 − 3x^5 is homogeneous of degree 5. For more details, see Homogeneous polynomial.
The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x". The polynomial 3x^2 − 5x + 4 is written in descending powers of x. The first term has coefficient 3, indeterminate x, and exponent 2. In the second term, the coefficient is −5. The third term is a constant. Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two.
Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0. Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and a three-term polynomial is called a trinomial.
A real polynomial is a polynomial with real coefficients. When it is used to define a function, the domain is not so restricted. However, a real polynomial function is a function from the reals to the reals that is defined by a real polynomial. Similarly, an integer polynomial is a polynomial with integer coefficients, and a complex polynomial is a polynomial with complex coefficients.
A polynomial in one indeterminate is called a univariate polynomial, a polynomial in more than one indeterminate is called a multivariate polynomial. A polynomial with two indeterminates is called a bivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials in , and ", listing the indeterminates allowed.
Operations
Addition and subtraction
Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the commutative law) and combining of like terms. For example, if
and
then the sum
can be reordered and regrouped as
and then simplified to
When polynomials are added together, the result is another polynomial.
Subtraction of polynomials is similar.
Multiplication
Polynomials can also be multiplied. To expand the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other. For example, if
then
Carrying out the multiplication in each term produces
Combining similar terms yields
which can be simplified to
As in the example, the product of polynomials is always a polynomial.
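Representing a univariate polynomial by its list of coefficients (lowest degree first) makes addition and multiplication straightforward to implement; the Python sketch below is a generic illustration and is not tied to the example above.

def poly_add(p, q):
    # Add two polynomials given as coefficient lists, lowest degree first.
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # Multiply two polynomials: every term of p multiplies every term of q.
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b   # x^i times x^j contributes to x^(i+j)
    return result

# (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3
print(poly_mul([1, 2], [3, 0, 1]))   # [3, 6, 1, 2]
print(poly_add([1, 2], [3, 0, 1]))   # [4, 2, 1]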
Composition
Given a polynomial of a single variable and another polynomial of any number of variables, the composition is obtained by substituting each copy of the variable of the first polynomial by the second polynomial. For example, if and then
A composition may be expanded to a sum of terms using the rules for multiplication and division of polynomials. The composition of two polynomials is another polynomial.
Division
The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, called rational fractions, rational expressions, or rational functions, depending on context. This is analogous to the fact that the ratio of two integers is a rational number, not necessarily an integer. For example, the fraction is not a polynomial, and it cannot be written as a finite sum of powers of the variable .
For polynomials in one variable, there is a notion of Euclidean division of polynomials, generalizing the Euclidean division of integers. This notion of the division of a polynomial a by a nonzero polynomial b results in two polynomials, a quotient q and a remainder r, such that a = b q + r and the degree of r is smaller than the degree of b. The quotient and remainder may be computed by any of several algorithms, including polynomial long division and synthetic division.
When the denominator b(x) is monic and linear, that is, b(x) = x − c for some constant c, then the polynomial remainder theorem asserts that the remainder of the division of a(x) by b(x) is the evaluation a(c). In this case, the quotient may be computed by Ruffini's rule, a special case of synthetic division.
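A minimal sketch of synthetic division (Ruffini's rule) for dividing by a monic linear factor x − r, using coefficient lists with the highest degree first; the example polynomial is chosen purely for illustration.

def ruffini(coeffs, r):
    # Divide a polynomial by (x - r) using synthetic division.
    # coeffs lists coefficients from highest to lowest degree.
    # Returns (quotient coefficients, remainder); by the polynomial remainder
    # theorem the remainder equals the value of the polynomial at r.
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + r * row[-1])
    return row[:-1], row[-1]

# Divide x^3 - 2x^2 - 4 by (x - 3): quotient x^2 + x + 3, remainder 5.
quotient, remainder = ruffini([1, -2, 0, -4], 3)
print(quotient, remainder)   # [1, 1, 3] 5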
Factoring
All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of complex numbers, the irreducible factors are linear. Over the real numbers, they have the degree either one or two. Over the integers and the rational numbers the irreducible factors may have any degree. For example, the factored form of
is
over the integers and the reals, and
over the complex numbers.
The computation of the factored form, called factorization is, in general, too difficult to be done by hand-written computation. However, efficient polynomial factorization algorithms are available in most computer algebra systems.
Calculus
Calculating derivatives and integrals of polynomials is particularly simple, compared to other kinds of functions.
The derivative of the polynomial a_n x^n + a_(n−1) x^(n−1) + ... + a_1 x + a_0 with respect to x is the polynomial n a_n x^(n−1) + (n−1) a_(n−1) x^(n−2) + ... + a_1.
Similarly, the general antiderivative (or indefinite integral) of this polynomial is a_n x^(n+1)/(n+1) + a_(n−1) x^n/n + ... + a_1 x^2/2 + a_0 x + c,
where c is an arbitrary constant. For example, antiderivatives of x^2 + 1 have the form x^3/3 + x + c.
For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers modulo some prime number p, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient k a_k understood to mean the sum of k copies of a_k. For example, over the integers modulo p, the derivative of the polynomial x^p + x is the polynomial 1.
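Because the formal derivative depends only on the coefficients, the same code works over the integers or with coefficients reduced modulo a prime; a minimal Python sketch (the example polynomials are illustrative):

def formal_derivative(coeffs, modulus=None):
    # Formal derivative of a polynomial given as a coefficient list,
    # lowest degree first; the coefficient of x^k contributes k copies
    # of itself to x^(k-1). Coefficients are reduced modulo `modulus` if given.
    deriv = [k * c for k, c in enumerate(coeffs)][1:]
    if modulus is not None:
        deriv = [c % modulus for c in deriv]
    return deriv

# d/dx (7 + 2x + x^3) = 2 + 3x^2 over the integers
print(formal_derivative([7, 2, 0, 1]))            # [2, 0, 3]
# d/dx (x + x^5) = 1 + 5x^4, which reduces to 1 over the integers modulo 5
print(formal_derivative([0, 1, 0, 0, 0, 1], 5))   # [1, 0, 0, 0, 0]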
Polynomial functions
A polynomial function is a function that can be defined by evaluating a polynomial. More precisely, a function f of one argument from a given domain is a polynomial function if there exists a polynomial
a_n x^n + a_(n−1) x^(n−1) + ... + a_1 x + a_0
that evaluates to f(x) for all x in the domain of f (here, n is a non-negative integer and a_0, a_1, ..., a_n are constant coefficients).
Generally, unless otherwise specified, polynomial functions have complex coefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the domain of this function is also restricted to the reals, the resulting function is a real function that maps reals to reals.
For example, the function , defined by
is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in
According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression which takes the same values as the polynomial on the interval , and thus both expressions define the same polynomial function on this interval.
Every polynomial function is continuous, smooth, and entire.
The evaluation of a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value to each indeterminate and carrying out the indicated multiplications and additions.
For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using Horner's method, which consists of rewriting the polynomial as a_0 + x(a_1 + x(a_2 + ... + x(a_(n−1) + x a_n)...)).
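A minimal Python sketch of Horner's method; it evaluates a degree-n polynomial with n multiplications and n additions (the example polynomial is illustrative).

def horner(coeffs, x):
    # Evaluate a polynomial at x; coefficients from highest to lowest degree.
    result = 0
    for c in coeffs:
        result = result * x + c   # peel off one level of the nested form
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))   # 5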
Graphs
A polynomial function in one real variable can be represented by a graph.
The graph of the zero polynomial f(x) = 0
is the x-axis.
The graph of a degree 0 polynomial f(x) = a_0, where a_0 ≠ 0,
is a horizontal line with y-intercept a_0.
The graph of a degree 1 polynomial (or linear function) f(x) = a_0 + a_1 x, where a_1 ≠ 0,
is an oblique line with y-intercept a_0 and slope a_1.
The graph of a degree 2 polynomial f(x) = a_0 + a_1 x + a_2 x^2, where a_2 ≠ 0,
is a parabola.
The graph of a degree 3 polynomial f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3, where a_3 ≠ 0,
is a cubic curve.
The graph of any polynomial f(x) = a_0 + a_1 x + ... + a_n x^n with degree 2 or greater (a_n ≠ 0 and n ≥ 2)
is a continuous non-linear curve.
A non-constant polynomial function tends to infinity when the variable increases indefinitely (in absolute value). If the degree is higher than one, the graph does not have any asymptote. It has two parabolic branches with vertical direction (one branch for positive x and one for negative x).
Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior.
Equations
A polynomial equation, also called an algebraic equation, is an equation of the form
a_n x^n + a_(n−1) x^(n−1) + ... + a_1 x + a_0 = 0.
For example,
3x^2 + 4x − 5 = 0
is a polynomial equation.
When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like (x + y)(x − y) = x^2 − y^2, where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality.
In elementary algebra, methods such as the quadratic formula are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the cubic and quartic equations. For higher degrees, the Abel–Ruffini theorem asserts that there can not exist a general formula in radicals. However, root-finding algorithms may be used to find numerical approximations of the roots of a polynomial expression of any degree.
The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when the complex solutions are counted with their multiplicity. This fact is called the fundamental theorem of algebra.
Solving equations
A root of a nonzero univariate polynomial P is a value a of x such that P(a) = 0. In other words, a root of P is a solution of the polynomial equation P(x) = 0 or a zero of the polynomial function defined by P. In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered.
A number a is a root of a polynomial P if and only if the linear polynomial x − a divides P, that is if there is another polynomial Q such that P = (x − a) Q. It may happen that a power (greater than 1) of x − a divides P; in this case, a is a multiple root of P, and otherwise a is a simple root of P. If P is a nonzero polynomial, there is a highest power m such that (x − a)^m divides P, which is called the multiplicity of a as a root of P. The number of roots of a nonzero polynomial P, counted with their respective multiplicities, cannot exceed the degree of P, and equals this degree if all complex roots are considered (this is a consequence of the fundamental theorem of algebra).
The coefficients of a polynomial and its roots are related by Vieta's formulas.
Some polynomials, such as x^2 + 1, do not have any roots among the real numbers. If, however, the set of accepted solutions is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial.
There may be several meanings of "solving an equation". One may want to express the solutions as explicit numbers; for example, the unique solution of 2x − 1 = 0 is 1/2. This is, in general, impossible for equations of degree greater than one, and, since the ancient times, mathematicians have searched to express the solutions as algebraic expressions; for example, the golden ratio is the unique positive solution of x^2 − x − 1 = 0. In the ancient times, they succeeded only for degrees one and two. For quadratic equations, the quadratic formula provides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, are known for equations of degree three and four (see cubic equation and quartic equation). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824, Niels Henrik Abel proved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula, involving only arithmetic operations and radicals (see Abel–Ruffini theorem). In 1830, Évariste Galois proved that most equations of degree higher than four cannot be solved by radicals, and showed that for each equation, one may decide whether it is solvable by radicals, and, if it is, solve it. This result marked the start of Galois theory and group theory, two important branches of modern algebra. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation).
When there is no algebraic expression for the roots, and when such an algebraic expression exists but is too complicated to be useful, the unique way of solving it is to compute numerical approximations of the solutions. There are many methods for that; some are restricted to polynomials and others may apply to any continuous function. The most efficient algorithms allow solving easily (on a computer) polynomial equations of degree higher than 1,000 (see Root-finding algorithm).
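As a simple illustration of numerical root finding (far less sophisticated than the efficient algorithms mentioned above), bisection locates a real root of a polynomial on any interval where the function changes sign; the example polynomial is illustrative.

def bisect_root(f, lo, hi, tol=1e-12):
    # Find a root of f in [lo, hi], assuming f(lo) and f(hi) have opposite signs.
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

p = lambda x: x**3 - 2            # has a real root at the cube root of 2
print(bisect_root(p, 1.0, 2.0))   # approximately 1.2599210498948732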
For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called zeros instead of "roots". The study of the sets of zeros of polynomials is the object of algebraic geometry. For a set of polynomial equations with several unknowns, there are algorithms to decide whether they have a finite number of complex solutions, and, if this number is finite, for computing the solutions. See System of polynomial equations.
The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of different solution methods exist, including the classical Gaussian elimination.
A polynomial equation for which one is interested only in the solutions which are integers is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general algorithm for solving them, or even for deciding whether the set of solutions is empty (see Hilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem.
Polynomial expressions
Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name.
Trigonometric polynomials
A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions.
If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using List of trigonometric identities#Multiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, with Product-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why linear combinations are called polynomials.
For complex coefficients, there is no difference between such a function and a finite Fourier series.
Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are also used in the discrete Fourier transform.
Matrix polynomials
A matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial
P(x) = a_n x^n + ... + a_1 x + a_0,
this polynomial evaluated at a matrix A is
P(A) = a_n A^n + ... + a_1 A + a_0 I,
where I is the identity matrix.
A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring Mn(R).
Exponential polynomials
A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for example , may be called an exponential polynomial.
Related concepts
Rational functions
A rational fraction is the quotient (algebraic fraction) of two polynomials. Any algebraic expression that can be rewritten as a rational fraction is a rational function.
While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero.
The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate.
Laurent polynomials
Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur.
Power series
Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just like irrational numbers cannot), but the rules for manipulating their terms are the same as for polynomials. Non-formal power series also generalize polynomials, but the multiplication of two power series may not converge.
Polynomial ring
A polynomial f over a commutative ring R is a polynomial all of whose coefficients belong to R. It is straightforward to verify that the polynomials in a given set of indeterminates over R form a commutative ring, called the polynomial ring in these indeterminates, denoted R[x] in the univariate case and R[x_1, ..., x_n] in the multivariate case.
One has R[x_1, ..., x_n] = (R[x_1, ..., x_(n−1)])[x_n].
So, most of the theory of the multivariate case can be reduced to an iterated univariate case.
The map from R to R[x] sending r to itself considered as a constant polynomial is an injective ring homomorphism, by which R is viewed as a subring of R[x]. In particular, R[x] is an algebra over R.
One can think of the ring R[x] as arising from R by adding one new element x to R, and extending in a minimal way to a ring in which x satisfies no other relations than the obligatory ones, plus commutation with all elements of R (that is, xr = rx). To do this, one must add all powers of x and their linear combinations as well.
Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring over the real numbers by factoring out the ideal of multiples of the polynomial x^2 + 1. Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring (see modular arithmetic).
If R is commutative, then one can associate with every polynomial f in R[x] a polynomial function with domain and range equal to R. (More generally, one can take domain and range to be any same unital associative algebra over R.) One obtains the value of this function by substituting a value r for the symbol x in f. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p). This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis. An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for x.
Divisibility
If R is an integral domain and f and g are polynomials in R[x], it is said that f divides g or f is a divisor of g if there exists a polynomial q in R[x] such that g = fq. If a is an element of R, then a is a root of f if and only if x − a divides f. In this case, the quotient can be computed using the polynomial long division.
If F is a field and f and g are polynomials in F[x] with g ≠ 0, then there exist unique polynomials q and r in F[x] with
f = q g + r
and such that the degree of r is smaller than the degree of g (using the convention that the polynomial 0 has a negative degree). The polynomials q and r are uniquely determined by f and g. This is called Euclidean division, division with remainder or polynomial long division and shows that the ring F[x] is a Euclidean domain.
Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials. In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant by a product of irreducible polynomials. If the coefficients belong to a field or a unique factorization domain this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit). When the coefficients belong to integers, rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials). These algorithms are not practicable for hand-written computation, but are available in any computer algebra system. Eisenstein's criterion can also be used in some cases to determine irreducibility.
Applications
Positional notation
In modern positional numbers systems, such as the decimal system, the digits and their positions in the representation of an integer, for example, 45, are a shorthand notation for a polynomial in the radix or base, in this case, 4 × 10^1 + 5 × 10^0. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number 1 × 5^2 + 3 × 5^1 + 2 × 5^0 = 42. This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form
a = r_m b^m + r_(m−1) b^(m−1) + ... + r_1 b + r_0,
where m is a nonnegative integer and the r's are integers such that 0 < r_m < b and 0 ≤ r_i < b for i = 0, 1, ..., m − 1.
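The correspondence between digit strings and polynomials in the base can be made concrete with a short Python sketch (function names are illustrative); it converts the radix-5 string 132 to its decimal value and back.

def digits_to_int(digits, base):
    # Interpret a digit list (most significant first) as a polynomial in the base.
    value = 0
    for d in digits:
        value = value * base + d   # Horner-style evaluation at x = base
    return value

def int_to_digits(n, base):
    # Recover the unique digit expansion of a positive integer in the given base.
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1]

print(digits_to_int([1, 3, 2], 5))   # 42
print(int_to_digits(42, 5))          # [1, 3, 2]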
Interpolation and approximation
The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. An important example in calculus is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include polynomial interpolation and the use of splines.
Other applications
Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph.
The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input.
History
Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29.
History of the notation
The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544. René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable. Descartes introduced the use of superscripts to denote exponents as well.
See also
List of polynomial topics
Notes
References
External links
Algebra | Polynomial | [
"Mathematics"
] | 6,731 | [
"Polynomials",
"Algebra"
] |
23,001 | https://en.wikipedia.org/wiki/Polymer | A polymer () is a substance or material that consists of very large molecules, or macromolecules, that are constituted by many repeating subunits derived from one or more species of monomers. Due to their broad spectrum of properties, both synthetic and natural polymers play essential and ubiquitous roles in everyday life. Polymers range from familiar synthetic plastics such as polystyrene to natural biopolymers such as DNA and proteins that are fundamental to biological structure and function. Polymers, both natural and synthetic, are created via polymerization of many small molecules, known as monomers. Their consequently large molecular mass, relative to small molecule compounds, produces unique physical properties including toughness, high elasticity, viscoelasticity, and a tendency to form amorphous and semicrystalline structures rather than crystals.
Polymers are studied in the fields of polymer science (which includes polymer chemistry and polymer physics), biophysics and materials science and engineering. Historically, products arising from the linkage of repeating units by covalent chemical bonds have been the primary focus of polymer science. An emerging important area now focuses on supramolecular polymers formed by non-covalent links. Polyisoprene of latex rubber is an example of a natural polymer, and the polystyrene of styrofoam is an example of a synthetic polymer. In biological contexts, essentially all biological macromolecules—i.e., proteins (polyamides), nucleic acids (polynucleotides), and polysaccharides—are purely polymeric, or are composed in large part of polymeric components.
Etymology
The term "polymer" derives . The term was coined in 1833 by Jöns Jacob Berzelius, though with a definition distinct from the modern IUPAC definition. The modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann Staudinger, who spent the next decade finding experimental evidence for this hypothesis.
Common examples
Polymers are of two types: naturally occurring and synthetic or man made.
Natural
Natural polymeric materials such as hemp, shellac, amber, wool, silk, and natural rubber have been used for centuries. A variety of other natural polymers exist, such as cellulose, which is the main constituent of wood and paper.
Space polymer
Hemoglycin (previously termed hemolithin) is a space polymer that is the first polymer of amino acids found in meteorites.
Synthetic
The list of synthetic polymers, roughly in order of worldwide demand, includes polyethylene, polypropylene, polystyrene, polyvinyl chloride, synthetic rubber, phenol formaldehyde resin (or Bakelite), neoprene, nylon, polyacrylonitrile, PVB, silicone, and many more. More than 330 million tons of these polymers are made every year (2015).
Most commonly, the continuously linked backbone of a polymer used for the preparation of plastics consists mainly of carbon atoms. A simple example is polyethylene ('polythene' in British English), whose repeat unit or monomer is ethylene. Many other structures do exist; for example, elements such as silicon form familiar materials such as silicones, examples being Silly Putty and waterproof plumbing sealant. Oxygen is also commonly present in polymer backbones, such as those of polyethylene glycol, polysaccharides (in glycosidic bonds), and DNA (in phosphodiester bonds).
Synthesis
Polymerization is the process of combining many small molecules known as monomers into a covalently bonded chain or network. During the polymerization process, some chemical groups may be lost from each monomer. This happens in the polymerization of PET polyester. The monomers are terephthalic acid (HOOCC6H4COOH) and ethylene glycol (HOCH2CH2OH) but the repeating unit is OCC6H4COOCH2CH2O, which corresponds to the combination of the two monomers with the loss of two water molecules. The distinct piece of each monomer that is incorporated into the polymer is known as a repeat unit or monomer residue.
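The mass bookkeeping of this condensation can be illustrated with approximate molar masses (the rounded values below are for illustration only): the repeat unit's mass equals the combined mass of the two monomers minus that of the two water molecules lost.

# Approximate molar masses in g/mol (rounded, for illustration).
terephthalic_acid = 166.13   # HOOC-C6H4-COOH
ethylene_glycol = 62.07      # HO-CH2CH2-OH
water = 18.02

repeat_unit = terephthalic_acid + ethylene_glycol - 2 * water
print(round(repeat_unit, 2))   # about 192.16 g/mol for -OC-C6H4-CO-O-CH2CH2-O-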
Synthetic methods are generally divided into two categories, step-growth polymerization and chain polymerization. The essential difference between the two is that in chain polymerization, monomers are added to the chain one at a time only, such as in polystyrene, whereas in step-growth polymerization chains of monomers may combine with one another directly, such as in polyester. Step-growth polymerization can be divided into polycondensation, in which low-molar-mass by-product is formed in every reaction step, and polyaddition.
Newer methods, such as plasma polymerization do not fit neatly into either category. Synthetic polymerization reactions may be carried out with or without a catalyst. Laboratory synthesis of biopolymers, especially of proteins, is an area of intensive research.
Biological synthesis
There are three main classes of biopolymers: polysaccharides, polypeptides, and polynucleotides.
In living cells, they may be synthesized by enzyme-mediated processes, such as the formation of DNA catalyzed by DNA polymerase. The synthesis of proteins involves multiple enzyme-mediated processes to transcribe genetic information from the DNA to RNA and subsequently translate that information to synthesize the specified protein from amino acids. The protein may be modified further following translation in order to provide appropriate structure and functioning. There are other biopolymers such as rubber, suberin, melanin, and lignin.
Modification of natural polymers
Naturally occurring polymers such as cotton, starch, and rubber were familiar materials for years before synthetic polymers such as polyethene and perspex appeared on the market. Many commercially important polymers are synthesized by chemical modification of naturally occurring polymers. Prominent examples include the reaction of nitric acid and cellulose to form nitrocellulose and the formation of vulcanized rubber by heating natural rubber in the presence of sulfur. Ways in which polymers can be modified include oxidation, cross-linking, and end-capping.
Structure
The structure of a polymeric material can be described at different length scales, from the sub-nm length scale up to the macroscopic one. There is in fact a hierarchy of structures, in which each stage provides the foundations for the next one.
The starting point for the description of the structure of a polymer is the identity of its constituent monomers. Next, the microstructure essentially describes the arrangement of these monomers within the polymer at the scale of a single chain. The microstructure determines the possibility for the polymer to form phases with different arrangements, for example through crystallization, the glass transition or microphase separation.
These features play a major role in determining the physical and chemical properties of a polymer.
Monomers and repeat units
The identity of the repeat units (monomer residues, also known as "mers") comprising a polymer is its first and most important attribute. Polymer nomenclature is generally based upon the type of monomer residues comprising the polymer. A polymer which contains only a single type of repeat unit is known as a homopolymer, while a polymer containing two or more types of repeat units is known as a copolymer. A terpolymer is a copolymer which contains three types of repeat units.
Polystyrene is composed only of styrene-based repeat units, and is classified as a homopolymer. Polyethylene terephthalate, even though produced from two different monomers (ethylene glycol and terephthalic acid), is usually regarded as a homopolymer because only one type of repeat unit is formed. Ethylene-vinyl acetate contains more than one variety of repeat unit and is a copolymer. Some biological polymers are composed of a variety of different but structurally related monomer residues; for example, polynucleotides such as DNA are composed of four types of nucleotide subunits.
Homopolymers and copolymers (examples):
Homopolymer polystyrene.
Homopolymer polydimethylsiloxane, a silicone; the main chain is formed of silicon and oxygen atoms.
The homopolymer polyethylene terephthalate has only one repeat unit.
Copolymer styrene-butadiene rubber: the repeat units based on styrene and 1,3-butadiene form two repeating units, which can alternate in any order in the macromolecule, making the polymer thus a random copolymer.
A polymer containing ionizable subunits (e.g., pendant carboxylic groups) is known as a polyelectrolyte or ionomer, when the fraction of ionizable units is large or small respectively.
Microstructure
The microstructure of a polymer (sometimes called configuration) relates to the physical arrangement of monomer residues along the backbone of the chain. These are the elements of polymer structure that require the breaking of a covalent bond in order to change. Various polymer structures can be produced depending on the monomers and reaction conditions: A polymer may consist of linear macromolecules, each containing only one unbranched chain. In the case of unbranched polyethylene, this chain is a long-chain n-alkane. There are also branched macromolecules with a main chain and side chains; in the case of polyethylene the side chains would be alkyl groups. In particular, unbranched macromolecules can be semi-crystalline in the solid state, with crystalline chain sections highlighted red in the figure below.
While branched and unbranched polymers are usually thermoplastics, many elastomers have wide-meshed cross-links between the "main chains". Close-meshed cross-linking, on the other hand, leads to thermosets. Cross-links and branches are shown as red dots in the figures. Highly branched polymers are amorphous, and the molecules in the solid interact randomly.
{| class="wikitable" style="text-align:center; font-size:90%;" width="60%"
|- class="hintergrundfarbe2"
| Linear, unbranched macromolecule
| Branched macromolecule
|Semi-crystalline structure of an unbranched polymer
| Slightly cross-linked polymer (elastomer)
| Highly cross-linked polymer (thermoset)
|}
Polymer architecture
An important microstructural feature of a polymer is its architecture and shape, which relates to the way branch points lead to a deviation from a simple linear chain. A branched polymer molecule is composed of a main chain with one or more substituent side chains or branches. Types of branched polymers include star polymers, comb polymers, polymer brushes, dendronized polymers, ladder polymers, and dendrimers. There exist also two-dimensional polymers (2DP) which are composed of topologically planar repeat units. A polymer's architecture affects many of its physical properties including solution viscosity, melt viscosity, solubility in various solvents, glass-transition temperature and the size of individual polymer coils in solution. A variety of techniques may be employed for the synthesis of a polymeric material with a range of architectures, for example living polymerization.
Chain length
A common means of expressing the length of a chain is the degree of polymerization, which quantifies the number of monomers incorporated into the chain. As with other molecules, a polymer's size may also be expressed in terms of molecular weight. Since synthetic polymerization techniques typically yield a statistical distribution of chain lengths, the molecular weight is expressed in terms of weighted averages. The number-average molecular weight (Mn) and weight-average molecular weight (Mw) are most commonly reported. The ratio of these two values (Mw / Mn) is the dispersity (Đ), which is commonly used to express the width of the molecular weight distribution.
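As a worked illustration of these averages (the chain-length distribution below is hypothetical and chosen only for illustration), Mn, Mw and the dispersity can be computed directly from the number of chains Ni at each molar mass Mi:

```python
# Number-average and weight-average molecular weight, and dispersity,
# computed from a hypothetical (illustrative) distribution of chains.
# Each entry is (number of chains N_i, molar mass M_i in g/mol).
distribution = [
    (1000, 5_000),
    (2000, 10_000),
    (1000, 20_000),
]

total_chains = sum(n for n, m in distribution)
total_mass = sum(n * m for n, m in distribution)

Mn = total_mass / total_chains                               # sum(N_i*M_i) / sum(N_i)
Mw = sum(n * m * m for n, m in distribution) / total_mass    # sum(N_i*M_i^2) / sum(N_i*M_i)
dispersity = Mw / Mn                                         # Đ = Mw / Mn, always >= 1

print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, Đ = {dispersity:.2f}")
```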
The physical properties of a polymer strongly depend on the length (or equivalently, the molecular weight) of the polymer chain. One important example of the physical consequences of the molecular weight is the scaling of the viscosity (resistance to flow) in the melt. The influence of the weight-average molecular weight (Mw) on the melt viscosity (η) depends on whether the polymer is above or below the onset of entanglements. Below the entanglement molecular weight, η ∝ Mw, whereas above the entanglement molecular weight, η ∝ Mw^3.4. In the latter case, increasing the polymer chain length 10-fold would increase the viscosity over 1000 times. Increasing chain length furthermore tends to decrease chain mobility, increase strength and toughness, and increase the glass-transition temperature (Tg). This is a result of the increase in chain interactions such as van der Waals attractions and entanglements that come with increased chain length. These interactions tend to fix the individual chains more strongly in position and resist deformations and matrix breakup, both at higher stresses and higher temperatures.
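A quick numeric sketch of the scaling above (assuming the commonly cited entangled-melt exponent of 3.4) shows why a 10-fold increase in chain length raises the melt viscosity by more than three orders of magnitude above the entanglement threshold, but only 10-fold below it:

```python
# Scaling of melt viscosity with weight-average molecular weight.
# Below the entanglement threshold: eta ~ Mw**1.0
# Above the entanglement threshold: eta ~ Mw**3.4 (commonly cited exponent)
def viscosity_ratio(length_factor: float, exponent: float) -> float:
    """Factor by which viscosity grows when chain length grows by length_factor."""
    return length_factor ** exponent

print(viscosity_ratio(10, 1.0))   # ~10x below the entanglement molecular weight
print(viscosity_ratio(10, 3.4))   # ~2512x above it, i.e. "over 1000 times"
```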
Monomer arrangement in copolymers
Copolymers are classified either as statistical copolymers, alternating copolymers, block copolymers, graft copolymers or gradient copolymers. In the schematic figure below, Ⓐ and Ⓑ symbolize the two repeat units.
{| class="wikitable" style="text-align:center; font-size:90%;"
|- class="hintergrundfarbe2"
| Random copolymer
| Gradient copolymer
| rowspan="2" | Graft copolymer
|- class="hintergrundfarbe2"
| Alternating copolymer
| Block copolymer
|}
Alternating copolymers possess two regularly alternating monomer residues (-A-B-A-B-...). An example is the equimolar copolymer of styrene and maleic anhydride formed by free-radical chain-growth polymerization. A step-growth copolymer such as Nylon 66 can also be considered a strictly alternating copolymer of diamine and diacid residues, but is often described as a homopolymer with the dimeric residue of one amine and one acid as a repeat unit.
Periodic copolymers have more than two species of monomer units in a regular sequence.
Statistical copolymers have monomer residues arranged according to a statistical rule. A statistical copolymer in which the probability of finding a particular type of monomer residue at a particular point in the chain is independent of the types of surrounding monomer residue may be referred to as a truly random copolymer. For example, the chain-growth copolymer of vinyl chloride and vinyl acetate is random.
Block copolymers have long sequences of different monomer units. Polymers with two or three blocks of two distinct chemical species (e.g., A and B) are called diblock copolymers and triblock copolymers, respectively. Polymers with three blocks, each of a different chemical species (e.g., A, B, and C) are termed triblock terpolymers.
Graft or grafted copolymers contain side chains or branches whose repeat units have a different composition or configuration than the main chain. The branches are added on to a preformed main chain macromolecule.
Monomers within a copolymer may be organized along the backbone in a variety of ways. A copolymer containing a controlled arrangement of monomers is called a sequence-controlled polymer. Alternating, periodic and block copolymers are simple examples of sequence-controlled polymers.
Tacticity
Tacticity describes the relative stereochemistry of chiral centers in neighboring structural units within a macromolecule. There are three types of tacticity: isotactic (all substituents on the same side), atactic (random placement of substituents), and syndiotactic (alternating placement of substituents).
{| class="wikitable" style="text-align:center; font-size:90%;" width="60%"
|- class="hintergrundfarbe2"
|Isotactic
| Syndiotactic
| Atactic (i.e., random)
|}
Morphology
Polymer morphology generally describes the arrangement and microscale ordering of polymer chains in space. The macroscopic physical properties of a polymer are related to the interactions between the polymer chains.
Disordered polymers: In the solid state, atactic polymers, polymers with a high degree of branching and random copolymers form amorphous (i.e., glassy) structures. In the melt and in solution, polymers tend to form a constantly changing "statistical cluster"; see the freely-jointed-chain model. In the solid state, the respective conformations of the molecules are frozen. Hooking and entanglement of chain molecules lead to a "mechanical bond" between the chains. Intermolecular and intramolecular attractive forces only occur at sites where molecule segments are close enough to each other. The irregular structures of the molecules prevent a closer packing of the chains.
Linear polymers with a periodic structure, low branching and stereoregularity (i.e., not atactic) have a semi-crystalline structure in the solid state. In simple polymers (such as polyethylene), the chains are present in the crystal in zigzag conformation. Several zigzag conformations form dense chain packs, called crystallites or lamellae. The lamellae are much thinner than the polymers are long (often about 10 nm). They are formed by more or less regular folding of one or more molecular chains. Amorphous structures exist between the lamellae. Individual molecules can lead to entanglements between the lamellae and can also be involved in the formation of two (or more) lamellae (such chains are then called tie molecules). Several lamellae form a superstructure, a spherulite, often with a diameter in the range of 0.05 to 1 mm.
The type and arrangement of (functional) residues of the repeat units affects or determines the crystallinity and the strength of the secondary valence bonds. In isotactic polypropylene, the molecules form a helix. Like the zigzag conformation, such helices allow a dense chain packing. Particularly strong intermolecular interactions occur when the residues of the repeating units allow the formation of hydrogen bonds, as in the case of p-aramid. The formation of strong intramolecular associations may produce diverse folded states of single linear chains with distinct circuit topology. Crystallinity and superstructure are always dependent on the conditions of their formation; see also: crystallization of polymers. Compared to amorphous structures, semi-crystalline structures lead to higher stiffness, density and melting temperature, and higher resistance of a polymer.
Cross-linked polymers: Wide-meshed cross-linked polymers are elastomers and cannot be melted (unlike thermoplastics); heating cross-linked polymers only leads to decomposition. Thermoplastic elastomers, on the other hand, are reversibly "physically crosslinked" and can be melted. Block copolymers in which a hard segment of the polymer has a tendency to crystallize and a soft segment has an amorphous structure are one type of thermoplastic elastomer: the hard segments ensure wide-meshed, physical crosslinking.
Crystallinity
When applied to polymers, the term crystalline has a somewhat ambiguous usage. In some cases, the term crystalline finds identical usage to that used in conventional crystallography. For example, the structure of a crystalline protein or polynucleotide, such as a sample prepared for x-ray crystallography, may be defined in terms of a conventional unit cell composed of one or more polymer molecules with cell dimensions of hundreds of angstroms or more. A synthetic polymer may be loosely described as crystalline if it contains regions of three-dimensional ordering on atomic (rather than macromolecular) length scales, usually arising from intramolecular folding or stacking of adjacent chains. Synthetic polymers may consist of both crystalline and amorphous regions; the degree of crystallinity may be expressed in terms of a weight fraction or volume fraction of crystalline material. Few synthetic polymers are entirely crystalline. The crystallinity of polymers is characterized by their degree of crystallinity, ranging from zero for a completely non-crystalline polymer to one for a theoretical completely crystalline polymer. Polymers with microcrystalline regions are generally tougher (can be bent more without breaking) and more impact-resistant than totally amorphous polymers. Polymers with a degree of crystallinity approaching zero or one will tend to be transparent, while polymers with intermediate degrees of crystallinity will tend to be opaque due to light scattering by crystalline or glassy regions. For many polymers, crystallinity may also be associated with decreased transparency.
Chain conformation
The space occupied by a polymer molecule is generally expressed in terms of radius of gyration, which is an average distance from the center of mass of the chain to the chain itself. Alternatively, it may be expressed in terms of pervaded volume, which is the volume spanned by the polymer chain and scales with the cube of the radius of gyration.
The simplest theoretical models for polymers in the molten, amorphous state are ideal chains.
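A minimal sketch of these definitions: given bead (monomer) coordinates for one chain, the radius of gyration is the root-mean-square distance of the beads from the chain's centre of mass, and the pervaded volume can be estimated as the volume of a sphere with that radius. The random-walk chain generator below is purely illustrative and stands in for an ideal (freely jointed) chain.

```python
import math
import random

def radius_of_gyration(coords):
    """Root-mean-square distance of the beads from the chain's centre of mass."""
    n = len(coords)
    cx = sum(x for x, y, z in coords) / n
    cy = sum(y for x, y, z in coords) / n
    cz = sum(z for x, y, z in coords) / n
    mean_sq = sum((x - cx)**2 + (y - cy)**2 + (z - cz)**2 for x, y, z in coords) / n
    return math.sqrt(mean_sq)

def random_walk_chain(n_beads, bond_length=1.0, seed=0):
    """Illustrative freely jointed chain: each bond points in a random direction."""
    random.seed(seed)
    coords = [(0.0, 0.0, 0.0)]
    for _ in range(n_beads - 1):
        theta = math.acos(random.uniform(-1, 1))
        phi = random.uniform(0, 2 * math.pi)
        x, y, z = coords[-1]
        coords.append((x + bond_length * math.sin(theta) * math.cos(phi),
                       y + bond_length * math.sin(theta) * math.sin(phi),
                       z + bond_length * math.cos(theta)))
    return coords

chain = random_walk_chain(1000)
rg = radius_of_gyration(chain)
pervaded_volume = (4.0 / 3.0) * math.pi * rg**3   # scales with the cube of Rg
print(f"Rg = {rg:.2f}, pervaded volume ≈ {pervaded_volume:.1f} (in bond-length units)")
```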
Properties
Polymer properties depend on their structure, and they are divided into classes according to their physical basis. Many physical and chemical properties describe how a polymer behaves as a continuous macroscopic material. They are classified as bulk properties, or intensive properties according to thermodynamics.
Mechanical properties
The bulk properties of a polymer are those most often of end-use interest. These are the properties that dictate how the polymer actually behaves on a macroscopic scale.
Tensile strength
The tensile strength of a material quantifies how much elongating stress the material will endure before failure. This is very important in applications that rely upon a polymer's physical strength or durability. For example, a rubber band with a higher tensile strength will hold a greater weight before snapping. In general, tensile strength increases with polymer chain length and crosslinking of polymer chains.
Young's modulus of elasticity
Young's modulus quantifies the elasticity of the polymer. It is defined, for small strains, as the ratio of the change in stress to the change in strain (the slope of the stress-strain curve). Like tensile strength, this is highly relevant in polymer applications involving the physical properties of polymers, such as rubber bands. The modulus is strongly dependent on temperature. Viscoelasticity describes a complex time-dependent elastic response, which will exhibit hysteresis in the stress-strain curve when the load is removed. Dynamic mechanical analysis (DMA) measures this complex modulus by oscillating the load and measuring the resulting strain as a function of time.
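A minimal sketch of extracting Young's modulus from the small-strain part of a stress-strain curve; the data points below are invented for illustration:

```python
# Estimate Young's modulus as the slope of the stress-strain curve at small strains.
# Hypothetical data: strain (dimensionless) and stress (MPa).
strain = [0.000, 0.001, 0.002, 0.003, 0.004]
stress = [0.0, 2.1, 4.0, 6.1, 7.9]   # roughly linear, slope ~2000 MPa

# Least-squares slope through the small-strain points.
n = len(strain)
mean_e = sum(strain) / n
mean_s = sum(stress) / n
slope = (sum((e - mean_e) * (s - mean_s) for e, s in zip(strain, stress))
         / sum((e - mean_e) ** 2 for e in strain))

print(f"Young's modulus ≈ {slope:.0f} MPa ({slope / 1000:.1f} GPa)")
```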
Transport properties
Transport properties such as diffusivity describe how rapidly molecules move through the polymer matrix. These are very important in many applications of polymers for films and membranes.
The movement of individual macromolecules occurs by a process called reptation in which each chain molecule is constrained by entanglements with neighboring chains to move within a virtual tube. The theory of reptation can explain polymer molecule dynamics and viscoelasticity.
Phase behavior
Crystallization and melting
Depending on their chemical structures, polymers may be either semi-crystalline or amorphous. Semi-crystalline polymers can undergo crystallization and melting transitions, whereas amorphous polymers do not. In polymers, crystallization and melting do not refer to solid-liquid phase transitions, as in the case of water or other molecular fluids. Instead, crystallization and melting refer to the phase transitions between two solid states (i.e., semi-crystalline and amorphous). Crystallization occurs above the glass-transition temperature (Tg) and below the melting temperature (Tm).
Glass transition
All polymers (amorphous or semi-crystalline) go through glass transitions. The glass-transition temperature (Tg) is a crucial physical parameter for polymer manufacturing, processing, and use. Below Tg, molecular motions are frozen and polymers are brittle and glassy. Above Tg, molecular motions are activated and polymers are rubbery and viscous. The glass-transition temperature may be engineered by altering the degree of branching or crosslinking in the polymer or by the addition of plasticizers.
Whereas crystallization and melting are first-order phase transitions, the glass transition is not. The glass transition shares features of second-order phase transitions (such as discontinuity in the heat capacity, as shown in the figure), but it is generally not considered a thermodynamic transition between equilibrium states.
Mixing behavior
In general, polymeric mixtures are far less miscible than mixtures of small molecule materials. This effect results from the fact that the driving force for mixing is usually entropy, not interaction energy. In other words, miscible materials usually form a solution not because their interaction with each other is more favorable than their self-interaction, but because of an increase in entropy and hence free energy associated with increasing the amount of volume available to each component. This increase in entropy scales with the number of particles (or moles) being mixed. Since polymeric molecules are much larger and hence generally have much higher specific volumes than small molecules, the number of molecules involved in a polymeric mixture is far smaller than the number in a small molecule mixture of equal volume. The energetics of mixing, on the other hand, is comparable on a per volume basis for polymeric and small molecule mixtures. This tends to increase the free energy of mixing for polymer solutions, making solvation less favorable and concentrated polymer solutions far rarer than concentrated solutions of small molecules.
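A rough numerical illustration of this argument uses the ideal combinatorial (Flory-Huggins-type) entropy of mixing per lattice site, ΔS/k = −(φA/NA) ln φA − (φB/NB) ln φB, where N is the number of segments per chain. The Flory-Huggins form, the chain lengths and the compositions are assumptions introduced here for illustration, not taken from the text above:

```python
import math

def mixing_entropy_per_site(phi_a, n_a, n_b):
    """Ideal combinatorial entropy of mixing per lattice site, in units of k_B
    (Flory-Huggins form). n_a, n_b = segments per chain of each component."""
    phi_b = 1.0 - phi_a
    return -(phi_a / n_a) * math.log(phi_a) - (phi_b / n_b) * math.log(phi_b)

# Small molecules mixed with small molecules (1 segment each)...
print(mixing_entropy_per_site(0.5, 1, 1))        # ~0.693 k_B per site
# ...versus two polymers of 1000 segments each at the same volume fractions:
print(mixing_entropy_per_site(0.5, 1000, 1000))  # ~0.0007 k_B per site, ~1000x smaller
```

Dividing each term by the chain length is exactly the statement that the entropy gain scales with the number of molecules mixed rather than with the volume mixed.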
Furthermore, the phase behavior of polymer solutions and mixtures is more complex than that of small molecule mixtures. Whereas most small molecule solutions exhibit only an upper critical solution temperature phase transition (UCST), at which phase separation occurs with cooling, polymer mixtures commonly exhibit a lower critical solution temperature phase transition (LCST), at which phase separation occurs with heating.
In dilute solutions, the properties of the polymer are characterized by the interaction between the solvent and the polymer. In a good solvent, the polymer appears swollen and occupies a large volume. In this scenario, intermolecular forces between the solvent and monomer subunits dominate over intramolecular interactions. In a bad solvent or poor solvent, intramolecular forces dominate and the chain contracts. In the theta solvent, or the state of the polymer solution where the value of the second virial coefficient becomes 0, the intermolecular polymer-solvent repulsion balances exactly the intramolecular monomer-monomer attraction. Under the theta condition (also called the Flory condition), the polymer behaves like an ideal random coil. The transition between the states is known as a coil–globule transition.
Inclusion of plasticizers
Inclusion of plasticizers tends to lower Tg and increase polymer flexibility. Addition of the plasticizer will also modify the dependence of the glass-transition temperature Tg on the cooling rate. The mobility of the chain can further change if the molecules of plasticizer give rise to hydrogen bonding formation. Plasticizers are generally small molecules that are chemically similar to the polymer and create gaps between polymer chains for greater mobility and fewer interchain interactions. A good example of the action of plasticizers is provided by polyvinyl chloride (PVC). uPVC, or unplasticized polyvinyl chloride, is used for applications such as pipes; a pipe contains no plasticizers, because it needs to remain strong and heat-resistant. Plasticized PVC is used in clothing for a flexible quality. Plasticizers are also put in some types of cling film to make the polymer more flexible.
Chemical properties
The attractive forces between polymer chains play a large part in determining the polymer's properties. Because polymer chains are so long, they have many such interchain interactions per molecule, amplifying the effect of these interactions on the polymer properties in comparison to attractions between conventional molecules. Different side groups on the polymer can lend the polymer to ionic bonding or hydrogen bonding between its own chains. These stronger forces typically result in higher tensile strength and higher crystalline melting points.
The intermolecular forces in polymers can be affected by dipoles in the monomer units. Polymers containing amide or carbonyl groups can form hydrogen bonds between adjacent chains; the partially positively charged hydrogen atoms in N-H groups of one chain are strongly attracted to the partially negatively charged oxygen atoms in C=O groups on another. These strong hydrogen bonds, for example, result in the high tensile strength and melting point of polymers containing urethane or urea linkages. Polyesters have dipole-dipole bonding between the oxygen atoms in C=O groups and the hydrogen atoms in H-C groups. Dipole bonding is not as strong as hydrogen bonding, so a polyester's melting point and strength are lower than those of Kevlar (Twaron), but polyesters have greater flexibility. Polymers with non-polar units such as polyethylene interact only through weak van der Waals forces. As a result, they typically have lower melting temperatures than other polymers.
When a polymer is dispersed or dissolved in a liquid, such as in commercial products like paints and glues, the chemical properties and molecular interactions influence how the solution flows and can even lead to self-assembly of the polymer into complex structures. When a polymer is applied as a coating, the chemical properties will influence the adhesion of the coating and how it interacts with external materials, such as superhydrophobic polymer coatings leading to water resistance. Overall the chemical properties of a polymer are important elements for designing new polymeric material products.
Optical properties
Polymers such as PMMA and HEMA:MMA are used as matrices in the gain medium of solid-state dye lasers, also known as solid-state dye-doped polymer lasers. These polymers have a high surface quality and are also highly transparent, so that the laser properties are dominated by the laser dye used to dope the polymer matrix. These types of lasers, which also belong to the class of organic lasers, are known to yield very narrow linewidths, which is useful for spectroscopy and analytical applications. An important optical parameter in polymers used in laser applications is the change in refractive index with temperature, also known as dn/dT. For the polymers mentioned here, dn/dT ≈ −1.4 × 10−4 K−1 in the 297 ≤ T ≤ 337 K range.
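Taking the quoted coefficient at face value, the total refractive-index change across the stated temperature interval follows directly (a simple arithmetic check, not a measured value):

```python
# Estimated refractive-index change across the quoted temperature range,
# using the quoted thermal coefficient dn/dT for these dye-laser host polymers.
dn_dT = -1.4e-4                 # K^-1 (quoted value)
T_low, T_high = 297.0, 337.0    # K (quoted range)

delta_n = dn_dT * (T_high - T_low)
print(f"Δn ≈ {delta_n:.4f} over a {T_high - T_low:.0f} K span")   # ≈ -0.0056
```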
Electrical properties
Most conventional polymers such as polyethylene are electrical insulators, but the development of polymers containing π-conjugated bonds has led to a wealth of polymer-based semiconductors, such as polythiophenes. This has led to many applications in the field of organic electronics.
Applications
Nowadays, synthetic polymers are used in almost all walks of life. Modern society would look very different without them. The widespread use of polymers is connected to their unique properties: low density, low cost, good thermal/electrical insulation properties, high resistance to corrosion, low energy demand in polymer manufacture and facile processing into final products. For a given application, the properties of a polymer can be tuned or enhanced by combination with other materials, as in composites. Their application makes it possible to save energy (lighter cars and planes, thermally insulated buildings), protect food and drinking water (packaging), save land and lower the use of fertilizers (synthetic fibres), preserve other materials (coatings), and protect and save lives (hygiene, medical applications). A representative, non-exhaustive list of applications is given below.
Clothing, sportswear and accessories: polyester and PVC clothing, spandex, sport shoes, wetsuits, footballs and billiard balls, skis and snowboards, rackets, parachutes, sails, tents and shelters.
Electronic and photonic technologies: organic field effect transistors (OFET), light emitting diodes (OLED) and solar cells, television components, compact discs (CD), photoresists, holography.
Packaging and containers: films, bottles, food packaging, barrels.
Insulation: electrical and thermal insulation, spray foams.
Construction and structural applications: garden furniture, PVC windows, flooring, sealing, pipes.
Paints, glues and lubricants: varnish, adhesives, dispersants, anti-graffiti coatings, antifouling coatings, non-stick surfaces, lubricants.
Car parts: tires, bumpers, windshields, windscreen wipers, fuel tanks, car seats.
Household items: buckets, kitchenware, toys (e.g., construction sets and Rubik's cube).
Medical applications: blood bag, syringes, rubber gloves, surgical suture, contact lenses, prosthesis, controlled drug delivery and release, matrices for cell growth.
Personal hygiene and healthcare: diapers using superabsorbent polymers, toothbrushes, cosmetics, shampoo, condoms.
Security: personal protective equipment, bulletproof vests, space suits, ropes.
Separation technologies: synthetic membranes, fuel cell membranes, filtration, ion-exchange resins.
Money: polymer banknotes and payment cards.
3D printing.
Standardized nomenclature
There are multiple conventions for naming polymer substances. Many commonly used polymers, such as those found in consumer products, are referred to by a common or trivial name. The trivial name is assigned based on historical precedent or popular usage rather than a standardized naming convention. Both the American Chemical Society (ACS) and IUPAC have proposed standardized naming conventions; the ACS and IUPAC conventions are similar but not identical. Examples of the differences between the various naming conventions are given in the table below:
In both standardized conventions, the polymers' names are intended to reflect the monomer(s) from which they are synthesized (source based nomenclature) rather than the precise nature of the repeating subunit. For example, the polymer synthesized from the simple alkene ethene is called polyethene, retaining the -ene suffix even though the double bond is removed during the polymerization process:
n CH2=CH2 → -[CH2-CH2]n-
However, IUPAC structure based nomenclature is based on naming of the preferred constitutional repeating unit.
IUPAC has also issued guidelines for abbreviating new polymer names; 138 common polymer abbreviations are also standardized in ISO 1043-1.
Characterization
Polymer characterization spans many techniques for determining the chemical composition, molecular weight distribution, and physical properties. Select common techniques include the following:
Size-exclusion chromatography (also called gel permeation chromatography), sometimes coupled with static light scattering, can be used to determine the number-average molecular weight, weight-average molecular weight, and dispersity.
Scattering techniques, such as static light scattering and small-angle neutron scattering, are used to determine the dimensions (radius of gyration) of macromolecules in solution or in the melt. These techniques are also used to characterize the three-dimensional structure of microphase-separated block polymers, polymeric micelles, and other materials.
Wide-angle X-ray scattering (also called wide-angle X-ray diffraction) is used to determine the crystalline structure of polymers (or lack thereof).
Spectroscopy techniques, including Fourier-transform infrared spectroscopy, Raman spectroscopy, and nuclear magnetic resonance spectroscopy, can be used to determine the chemical composition.
Differential scanning calorimetry is used to characterize the thermal properties of polymers, such as the glass-transition temperature, crystallization temperature, and melting temperature. The glass-transition temperature can also be determined by dynamic mechanical analysis.
Thermogravimetry is a useful technique to evaluate the thermal stability of the polymer.
Rheology is used to characterize the flow and deformation behavior. It can be used to determine the viscosity, modulus, and other rheological properties. Rheology is also often used to determine the molecular architecture (molecular weight, molecular weight distribution, branching) and to understand how the polymer can be processed.
Degradation
Polymer degradation is a change in the properties—tensile strength, color, shape, or molecular weight—of a polymer or polymer-based product under the influence of one or more environmental factors, such as heat, light, and the presence of certain chemicals, oxygen, and enzymes. This change in properties is often the result of bond breaking in the polymer backbone (chain scission) which may occur at the chain ends or at random positions in the chain.
Although such changes are frequently undesirable, in some cases, such as biodegradation and recycling, they may be intended to prevent environmental pollution. Degradation can also be useful in biomedical settings. For example, a copolymer of polylactic acid and polyglycolic acid is employed in hydrolysable stitches that slowly degrade after they are applied to a wound.
The susceptibility of a polymer to degradation depends on its structure. Epoxies and chains containing aromatic functionalities are especially susceptible to UV degradation while polyesters are susceptible to degradation by hydrolysis. Polymers containing an unsaturated backbone degrade via ozone cracking. Carbon based polymers are more susceptible to thermal degradation than inorganic polymers such as polydimethylsiloxane and are therefore not ideal for most high-temperature applications.
The degradation of polyethylene occurs by random scission—a random breakage of the bonds that hold the atoms of the polymer together. When heated above 450 °C, polyethylene degrades to form a mixture of hydrocarbons. In the case of chain-end scission, monomers are released and this process is referred to as unzipping or depolymerization. Which mechanism dominates will depend on the type of polymer and temperature; in general, polymers with no or a single small substituent in the repeat unit will decompose via random-chain scission.
The sorting of polymer waste for recycling purposes may be facilitated by the use of the resin identification codes developed by the Society of the Plastics Industry to identify the type of plastic.
Product failure
Failure of safety-critical polymer components can cause serious accidents, such as fire in the case of cracked and degraded polymer fuel lines. Chlorine-induced cracking of acetal resin plumbing joints and polybutylene pipes has caused many serious floods in domestic properties, especially in the US in the 1990s. Traces of chlorine in the water supply attacked polymers present in the plumbing, a problem which occurs faster if any of the parts have been poorly extruded or injection molded. Attack of the acetal joint occurred because of faulty molding, leading to cracking along the threads of the fitting where there is stress concentration.
Polymer oxidation has caused accidents involving medical devices. One of the oldest known failure modes is ozone cracking caused by chain scission when ozone gas attacks susceptible elastomers, such as natural rubber and nitrile rubber. They possess double bonds in their repeat units which are cleaved during ozonolysis. Cracks in fuel lines can penetrate the bore of the tube and cause fuel leakage. If cracking occurs in the engine compartment, electric sparks can ignite the gasoline and can cause a serious fire. In medical use degradation of polymers can lead to changes of physical and chemical characteristics of implantable devices.
Nylon 66 is susceptible to acid hydrolysis, and in one accident, a fractured fuel line led to a spillage of diesel into the road. If diesel fuel leaks onto the road, accidents to following cars can be caused by the slippery nature of the deposit, which is like black ice. Furthermore, the asphalt concrete road surface will suffer damage as a result of the diesel fuel dissolving the asphaltenes from the composite material, resulting in the degradation of the asphalt surface and the structural integrity of the road.
History
Polymers have been essential components of commodities since the early days of humankind. The use of wool (keratin), cotton and linen fibres (cellulose) for garments, paper reed (cellulose) for paper are just a few examples of how ancient societies exploited polymer-containing raw materials to obtain artefacts. The latex sap of "caoutchouc" trees (natural rubber) reached Europe in the 16th century from South America long after the Olmec, Maya and Aztec had started using it as a material to make balls, waterproof textiles and containers.
The chemical manipulation of polymers dates back to the 19th century, although at the time the nature of these species was not understood. The behaviour of polymers was initially rationalised according to the theory proposed by Thomas Graham which considered them as colloidal aggregates of small molecules held together by unknown forces.
Notwithstanding the lack of theoretical knowledge, the potential of polymers to provide innovative, accessible and cheap materials was immediately grasped. The work carried out by Braconnot, Parkes, Ludersdorf, Hayward and many others on the modification of natural polymers determined many significant advances in the field. Their contributions led to the discovery of materials such as celluloid, galalith, parkesine, rayon, vulcanised rubber and, later, Bakelite: all materials that quickly entered industrial manufacturing processes and reached households as garments components (e.g., fabrics, buttons), crockery and decorative items.
In 1920, Hermann Staudinger published his seminal work "Über Polymerisation", in which he proposed that polymers were in fact long chains of atoms linked by covalent bonds. His work was debated at length, but eventually it was accepted by the scientific community. Because of this work, Staudinger was awarded the Nobel Prize in 1953.
After the 1930s polymers entered a golden age during which new types were discovered and quickly given commercial applications, replacing naturally-sourced materials. This development was fuelled by an industrial sector with a strong economic drive and it was supported by a broad academic community that contributed innovative syntheses of monomers from cheaper raw material, more efficient polymerisation processes, improved techniques for polymer characterisation and advanced, theoretical understanding of polymers.
Since 1953, six Nobel prizes have been awarded in the area of polymer science, excluding those for research on biological macromolecules. This further testifies to its impact on modern science and technology. As Lord Todd summarised in 1980, "I am inclined to think that the development of polymerization is perhaps the biggest thing that chemistry has done, where it has had the biggest effect on everyday life".
See also
Ideal chain
Catenation
Inorganic polymer
Important publications in polymer chemistry
Oligomer
Polymer adsorption
Polymer classes
Polymer engineering
Polymery (botany)
Reactive compatibilization
Sequence-controlled polymer
Shape-memory polymer
Sol–gel process
Supramolecular polymer
Thermoplastic
Thermosetting polymer
References
Bibliography
External links
Libretext in Polymer chemistry
How to Analyze Polymers Using X-ray Diffraction
The Macrogalleria
Introduction to Polymers
Glossary of Polymer Abbreviations
Polymer chemistry
Soft matter
Materials science | Polymer | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 9,115 | [
"Organic polymers",
"Applied and interdisciplinary physics",
"Soft matter",
"Materials science",
"Organic compounds",
"Condensed matter physics",
"Polymer chemistry",
"nan",
"Polymers"
] |
23,053 | https://en.wikipedia.org/wiki/Periodic%20table | The periodic table, also known as the periodic table of the elements, is an ordered arrangement of the chemical elements into rows ("periods") and columns ("groups"). It is an icon of chemistry and is widely used in physics and other sciences. It is a depiction of the periodic law, which states that when the elements are arranged in order of their atomic numbers an approximate recurrence of their properties is evident. The table is divided into four roughly rectangular areas called blocks. Elements in the same group tend to show similar chemical characteristics.
Vertical, horizontal and diagonal trends characterize the periodic table. Metallic character increases going down a group and from right to left across a period. Nonmetallic character increases going from the bottom left of the periodic table to the top right.
The first periodic table to become generally accepted was that of the Russian chemist Dmitri Mendeleev in 1869; he formulated the periodic law as a dependence of chemical properties on atomic mass. As not all elements were then known, there were gaps in his periodic table, and Mendeleev successfully used the periodic law to predict some properties of some of the missing elements. The periodic law was recognized as a fundamental discovery in the late 19th century. It was explained early in the 20th century, with the discovery of atomic numbers and associated pioneering work in quantum mechanics, both ideas serving to illuminate the internal structure of the atom. A recognisably modern form of the table was reached in 1945 with Glenn T. Seaborg's discovery that the actinides were in fact f-block rather than d-block elements. The periodic table and law are now a central and indispensable part of modern chemistry.
The periodic table continues to evolve with the progress of science. In nature, only elements up to atomic number 94 exist; to go further, it was necessary to synthesize new elements in the laboratory. By 2010, the first 118 elements were known, thereby completing the first seven rows of the table; however, chemical characterization is still needed for the heaviest elements to confirm that their properties match their positions. New discoveries will extend the table beyond these seven rows, though it is not yet known how many more elements are possible; moreover, theoretical calculations suggest that this unknown region will not follow the patterns of the known part of the table. Some scientific discussion also continues regarding whether some elements are correctly positioned in today's table. Many alternative representations of the periodic law exist, and there is some discussion as to whether there is an optimal form of the periodic table.
Structure
Each chemical element has a unique atomic number (Z for "Zahl", German for "number") representing the number of protons in its nucleus. Each distinct atomic number therefore corresponds to a class of atom: these classes are called the chemical elements. The chemical elements are what the periodic table classifies and organizes. Hydrogen is the element with atomic number 1; helium, atomic number 2; lithium, atomic number 3; and so on. Each of these names can be further abbreviated by a one- or two-letter chemical symbol; those for hydrogen, helium, and lithium are respectively H, He, and Li. Neutrons do not affect the atom's chemical identity, but do affect its weight. Atoms with the same number of protons but different numbers of neutrons are called isotopes of the same chemical element. Naturally occurring elements usually occur as mixes of different isotopes; since each isotope usually occurs with a characteristic abundance, naturally occurring elements have well-defined atomic weights, defined as the average mass of a naturally occurring atom of that element.
All elements have multiple isotopes, variants with the same number of protons but different numbers of neutrons. For example, carbon has three naturally occurring isotopes: all of its atoms have six protons and most have six neutrons as well, but about one per cent have seven neutrons, and a very small fraction have eight neutrons. Isotopes are never separated in the periodic table; they are always grouped together under a single element. When atomic mass is shown, it is usually the weighted average of naturally occurring isotopes; but if no isotopes occur naturally in significant quantities, the mass of the most stable isotope usually appears, often in parentheses.
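As a concrete illustration of such a weighted average, carbon's standard atomic weight follows from the masses and abundances of its two main stable isotopes (values rounded for illustration; the carbon-14 traces mentioned above are negligible here):

```python
# Standard atomic weight as the abundance-weighted average of isotope masses.
# Carbon's two main stable isotopes (abundances rounded for illustration).
isotopes = [
    (12.000, 0.989),   # carbon-12: mass (u), approximate natural abundance
    (13.003, 0.011),   # carbon-13
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(f"Atomic weight of carbon ≈ {atomic_weight:.3f} u")   # ≈ 12.011
```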
In the standard periodic table, the elements are listed in order of increasing atomic number. A new row (period) is started when a new electron shell has its first electron. Columns (groups) are determined by the electron configuration of the atom; elements with the same number of electrons in a particular subshell fall into the same columns (e.g. oxygen, sulfur, and selenium are in the same column because they all have four electrons in the outermost p-subshell). Elements with similar chemical properties generally fall into the same group in the periodic table, although in the f-block, and to some respect in the d-block, the elements in the same period tend to have similar properties, as well. Thus, it is relatively easy to predict the chemical properties of an element if one knows the properties of the elements around it.
Today, 118 elements are known, the first 94 of which are known to occur naturally on Earth at present. The remaining 24, americium to oganesson (95–118), occur only when synthesized in laboratories. Of the 94 naturally occurring elements, 83 are primordial and 11 occur only in decay chains of primordial elements. A few of the latter are so rare that they were not discovered in nature, but were synthesized in the laboratory before it was determined that they do exist in nature after all: technetium (element 43), promethium (element 61), astatine (element 85), neptunium (element 93), and plutonium (element 94). No element heavier than einsteinium (element 99) has ever been observed in macroscopic quantities in its pure form, nor has astatine; francium (element 87) has been only photographed in the form of light emitted from microscopic quantities (300,000 atoms). Of the 94 natural elements, eighty have a stable isotope and one more (bismuth) has an almost-stable isotope (with a half-life of 2.01×1019 years, over a billion times the age of the universe). Two more, thorium and uranium, have isotopes undergoing radioactive decay with a half-life comparable to the age of the Earth. The stable elements plus bismuth, thorium, and uranium make up the 83 primordial elements that survived from the Earth's formation. The remaining eleven natural elements decay quickly enough that their continued trace occurrence rests primarily on being constantly regenerated as intermediate products of the decay of thorium and uranium. All 24 known artificial elements are radioactive.
Group names and numbers
Under an international naming convention, the groups are numbered numerically from 1 to 18 from the leftmost column (the alkali metals) to the rightmost column (the noble gases). The f-block groups are ignored in this numbering. Groups can also be named by their first element, e.g. the "scandium group" for group 3. Previously, groups were known by Roman numerals. In the United States, the Roman numerals were followed by either an "A" if the group was in the s- or p-block, or a "B" if the group was in the d-block. The Roman numerals used correspond to the last digit of today's naming convention (e.g. the group 4 elements were group IVB, and the group 14 elements were group IVA). In Europe, the lettering was similar, except that "A" was used for groups 1 through 7, and "B" was used for groups 11 through 17. In addition, groups 8, 9 and 10 used to be treated as one triple-sized group, known collectively in both notations as group VIII. In 1988, the new IUPAC (International Union of Pure and Applied Chemistry) naming system (1–18) was put into use, and the old group names (I–VIII) were deprecated.
Presentation forms
32 columns
18 columns
For reasons of space, the periodic table is commonly presented with the f-block elements cut out and positioned as a distinct part below the main body. This reduces the number of element columns from 32 to 18.
Both forms represent the same periodic table. The form with the f-block included in the main body is sometimes called the 32-column or long form; the form with the f-block cut out is called the 18-column or medium-long form. The 32-column form has the advantage of showing all elements in their correct sequence, but it has the disadvantage of requiring more space. The form chosen is an editorial choice, and does not imply any change of scientific claim or statement. For example, when discussing the composition of group 3, the options can be shown equally (unprejudiced) in both forms.
Periodic tables usually at least show the elements' symbols; many also provide supplementary information about the elements, either via colour-coding or as data in the cells. The above table shows the names and atomic numbers of the elements, and also their blocks, natural occurrences and standard atomic weights. For the short-lived elements without standard atomic weights, the mass number of the most stable known isotope is used instead. Other tables may include properties such as state of matter, melting and boiling points, densities, as well as provide different classifications of the elements.
Electron configurations
The periodic table is a graphic description of the periodic law, which states that the properties and atomic structures of the chemical elements are a periodic function of their atomic number. Elements are placed in the periodic table according to their electron configurations, the periodic recurrences of which explain the trends in properties across the periodic table.
An electron can be thought of as inhabiting an atomic orbital, which characterizes the probability it can be found in any particular region around the atom. Their energies are quantised, which is to say that they can only take discrete values. Furthermore, electrons obey the Pauli exclusion principle: different electrons must always be in different states. This allows classification of the possible states an electron can take in various energy levels known as shells, divided into individual subshells, which each contain one or more orbitals. Each orbital can contain up to two electrons: they are distinguished by a quantity known as spin, conventionally labelled "up" or "down". In a cold atom (one in its ground state), electrons arrange themselves in such a way that the total energy they have is minimized by occupying the lowest-energy orbitals available. Only the outermost electrons (so-called valence electrons) have enough energy to break free of the nucleus and participate in chemical reactions with other atoms. The others are called core electrons.
Elements are known with up to the first seven shells occupied. The first shell contains only one orbital, a spherical s orbital. As it is in the first shell, this is called the 1s orbital. This can hold up to two electrons. The second shell similarly contains a 2s orbital, and it also contains three dumbbell-shaped 2p orbitals, and can thus fill up to eight electrons (2×1 + 2×3 = 8). The third shell contains one 3s orbital, three 3p orbitals, and five 3d orbitals, and thus has a capacity of 2×1 + 2×3 + 2×5 = 18. The fourth shell contains one 4s orbital, three 4p orbitals, five 4d orbitals, and seven 4f orbitals, thus leading to a capacity of 2×1 + 2×3 + 2×5 + 2×7 = 32. Higher shells contain more types of orbitals that continue the pattern, but such types of orbitals are not filled in the ground states of known elements. The subshell types are characterized by the quantum numbers. Four numbers describe an orbital in an atom completely: the principal quantum number n, the azimuthal quantum number ℓ (the orbital type), the orbital magnetic quantum number mℓ, and the spin magnetic quantum number ms.
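The shell capacities quoted above follow a simple pattern: shell n offers orbital types ℓ = 0 … n−1, each type contributes 2ℓ+1 orbitals, and each orbital holds two electrons, so the total is 2n². A minimal sketch:

```python
def shell_capacity(n: int) -> int:
    """Maximum number of electrons in shell n: sum over subshells l = 0..n-1
    of 2*(2l + 1) electron slots, which simplifies to 2*n**2."""
    return sum(2 * (2 * l + 1) for l in range(n))

for n in range(1, 5):
    print(n, shell_capacity(n))   # 1 -> 2, 2 -> 8, 3 -> 18, 4 -> 32
```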
Order of subshell filling
The sequence in which the subshells are filled is given in most cases by the Aufbau principle, also known as the Madelung or Klechkovsky rule (after Erwin Madelung and Vsevolod Klechkovsky respectively). This rule was first observed empirically by Madelung, and Klechkovsky and later authors gave it theoretical justification. The shells overlap in energies, and the Madelung rule specifies the sequence of filling according to:
1s ≪ 2s < 2p ≪ 3s < 3p ≪ 4s < 3d < 4p ≪ 5s < 4d < 5p ≪ 6s < 4f < 5d < 6p ≪ 7s < 5f < 6d < 7p ≪ ...
Here the sign ≪ means "much less than" as opposed to < meaning just "less than". Phrased differently, electrons enter orbitals in order of increasing n + ℓ, and if two orbitals are available with the same value of n + ℓ, the one with lower n is occupied first. In general, orbitals with the same value of n + ℓ are similar in energy, but in the case of the s-orbitals (with ℓ = 0), quantum effects raise their energy to approach that of the next n + ℓ group. Hence the periodic table is usually drawn to begin each row (often called a period) with the filling of a new s-orbital, which corresponds to the beginning of a new shell. Thus, with the exception of the first row, each period length appears twice:
2, 8, 8, 18, 18, 32, 32, ...
The overlaps get quite close at the point where the d-orbitals enter the picture, and the order can shift slightly with atomic number and atomic charge.
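The Madelung rule is easy to state as a procedure: list the subshells, sort them by n + ℓ and then by n, and start a new period at each s-subshell; the period lengths 2, 8, 8, 18, 18, 32, 32 then drop out. The sketch below covers only the subshells filled in known elements and, like the table itself, ignores the slight shifts in ordering just mentioned:

```python
# Generate the Madelung (n + l, then n) subshell filling order and the
# resulting period lengths, for the subshells occupied in known elements.
SUBSHELL_LETTERS = "spdf"
CAPACITY = {l: 2 * (2 * l + 1) for l in range(4)}   # s=2, p=6, d=10, f=14

subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4)) if n + l <= 8]
madelung_order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

print(" < ".join(f"{n}{SUBSHELL_LETTERS[l]}" for n, l in madelung_order))
# 1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p < 6s < 4f < 5d < 6p < 7s < 5f < 6d < 7p

# Period lengths: a new period starts with each s-subshell.
periods, current = [], 0
for n, l in madelung_order:
    if l == 0 and current:   # a new s-subshell closes the previous period
        periods.append(current)
        current = 0
    current += CAPACITY[l]
periods.append(current)
print(periods)   # [2, 8, 8, 18, 18, 32, 32]
```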
Starting from the simplest atom, this lets us build up the periodic table one element at a time in order of atomic number, by considering the cases of single atoms. In hydrogen, there is only one electron, which must go in the lowest-energy orbital 1s. This electron configuration is written 1s1, where the superscript indicates the number of electrons in the subshell. Helium adds a second electron, which also goes into 1s, completely filling the first shell and giving the configuration 1s2.
Starting from the third element, lithium, the first shell is full, so its third electron occupies a 2s orbital, giving a 1s2 2s1 configuration. The 2s electron is lithium's only valence electron, as the 1s subshell is now too tightly bound to the nucleus to participate in chemical bonding to other atoms: such a shell is called a "core shell". The 1s subshell is a core shell for all elements from lithium onward. The 2s subshell is completed by the next element beryllium (1s2 2s2). The following elements then proceed to fill the 2p subshell. Boron (1s2 2s2 2p1) puts its new electron in a 2p orbital; carbon (1s2 2s2 2p2) fills a second 2p orbital; and with nitrogen (1s2 2s2 2p3) all three 2p orbitals become singly occupied. This is consistent with Hund's rule, which states that atoms usually prefer to singly occupy each orbital of the same type before filling them with the second electron. Oxygen (1s2 2s2 2p4), fluorine (1s2 2s2 2p5), and neon (1s2 2s2 2p6) then complete the already singly filled 2p orbitals; the last of these fills the second shell completely.
Starting from element 11, sodium, the second shell is full, making the second shell a core shell for this and all heavier elements. The eleventh electron begins the filling of the third shell by occupying a 3s orbital, giving a configuration of 1s2 2s2 2p6 3s1 for sodium. This configuration is abbreviated [Ne] 3s1, where [Ne] represents neon's configuration. Magnesium ([Ne] 3s2) finishes this 3s orbital, and the following six elements aluminium, silicon, phosphorus, sulfur, chlorine, and argon fill the three 3p orbitals ([Ne] 3s2 3p1 through [Ne] 3s2 3p6). This creates an analogous series in which the outer shell structures of sodium through argon are analogous to those of lithium through neon, and is the basis for the periodicity of chemical properties that the periodic table illustrates: at regular but changing intervals of atomic numbers, the properties of the chemical elements approximately repeat.
The first 18 elements can thus be arranged as the start of a periodic table. Elements in the same column have the same number of valence electrons and have analogous valence electron configurations: these columns are called groups. The single exception is helium, which has two valence electrons like beryllium and magnesium, but is typically placed in the column of neon and argon to emphasise that its outer shell is full. (Some contemporary authors question even this single exception, preferring to consistently follow the valence configurations and place helium over beryllium.) There are eight columns in this periodic table fragment, corresponding to at most eight outer-shell electrons. A period begins when a new shell starts filling. Finally, the colouring illustrates the blocks: the elements in the s-block (coloured red) are filling s-orbitals, while those in the p-block (coloured yellow) are filling p-orbitals.
Starting the next row, for potassium and calcium the 4s subshell is the lowest in energy, and therefore they fill it. Potassium adds one electron to the 4s shell ([Ar] 4s1), and calcium then completes it ([Ar] 4s2). However, starting from scandium ([Ar] 3d1 4s2) the 3d subshell becomes the next highest in energy. The 4s and 3d subshells have approximately the same energy and they compete for filling the electrons, and so the occupation is not quite consistently filling the 3d orbitals one at a time. The precise energy ordering of 3d and 4s changes along the row, and also changes depending on how many electrons are removed from the atom. For example, due to the repulsion between the 3d electrons and the 4s ones, at chromium the 4s energy level becomes slightly higher than 3d, and so it becomes more profitable for a chromium atom to have a [Ar] 3d5 4s1 configuration than an [Ar] 3d4 4s2 one. A similar anomaly occurs at copper, whose atom has a [Ar] 3d10 4s1 configuration rather than the expected [Ar] 3d9 4s2. These are violations of the Madelung rule. Such anomalies, however, do not have any chemical significance: most chemistry is not about isolated gaseous atoms, and the various configurations are so close in energy to each other that the presence of a nearby atom can shift the balance. Therefore, the periodic table ignores them and considers only idealized configurations.
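Following the idealized filling order only (so the chromium- and copper-type anomalies just described are deliberately not reproduced), a ground-state configuration can be sketched for any atomic number; this is a minimal illustration of the idealized configurations the table is based on, not a substitute for the experimentally known ones:

```python
# Idealized (Madelung-order) electron configuration for a neutral atom with Z electrons.
# Real-atom anomalies (e.g. chromium, copper) are deliberately ignored, as in the text.
LETTERS = "spdf"
ORDER = sorted(
    [(n, l) for n in range(1, 8) for l in range(min(n, 4)) if n + l <= 8],
    key=lambda nl: (nl[0] + nl[1], nl[0]),
)

def idealized_configuration(z: int) -> str:
    parts, remaining = [], z
    for n, l in ORDER:
        if remaining <= 0:
            break
        electrons = min(remaining, 2 * (2 * l + 1))   # fill the subshell or use what is left
        parts.append(f"{n}{LETTERS[l]}{electrons}")
        remaining -= electrons
    return " ".join(parts)

print(idealized_configuration(11))   # 1s2 2s2 2p6 3s1  (sodium)
print(idealized_configuration(26))   # 1s2 2s2 2p6 3s2 3p6 4s2 3d6  (iron)
```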
At zinc ([Ar] 3d10 4s2), the 3d orbitals are completely filled with a total of ten electrons. Next come the 4p orbitals, completing the row, which are filled progressively by gallium ([Ar] 3d10 4s2 4p1) through krypton ([Ar] 3d10 4s2 4p6), in a manner analogous to the previous p-block elements. From gallium onwards, the 3d orbitals form part of the electronic core, and no longer participate in chemistry. The s- and p-block elements, which fill their outer shells, are called main-group elements; the d-block elements (coloured blue below), which fill an inner shell, are called transition elements (or transition metals, since they are all metals).
The next 18 elements fill the 5s orbitals (rubidium and strontium), then 4d (yttrium through cadmium, again with a few anomalies along the way), and then 5p (indium through xenon). Again, from indium onward the 4d orbitals are in the core. Hence the fifth row has the same structure as the fourth.
The sixth row of the table likewise starts with two s-block elements: caesium and barium. After this, the first f-block elements (coloured green below) begin to appear, starting with lanthanum. These are sometimes termed inner transition elements. As there are now not only 4f but also 5d and 6s subshells at similar energies, competition occurs once again with many irregular configurations; this resulted in some dispute about where exactly the f-block is supposed to begin, but most who study the matter agree that it starts at lanthanum in accordance with the Aufbau principle. Even though lanthanum does not itself fill the 4f subshell as a single atom, because of repulsion between electrons, its 4f orbitals are low enough in energy to participate in chemistry. At ytterbium, the seven 4f orbitals are completely filled with fourteen electrons; thereafter, a series of ten transition elements (lutetium through mercury) follows, and finally six main-group elements (thallium through radon) complete the period. From lutetium onwards the 4f orbitals are in the core, and from thallium onwards so are the 5d orbitals.
The seventh row is analogous to the sixth row: 7s fills (francium and radium), then 5f (actinium to nobelium), then 6d (lawrencium to copernicium), and finally 7p (nihonium to oganesson). Starting from lawrencium the 5f orbitals are in the core, and probably the 6d orbitals join the core starting from nihonium. Again there are a few anomalies along the way: for example, as single atoms neither actinium nor thorium actually fills the 5f subshell, and lawrencium does not fill the 6d shell, but all these subshells can still become filled in chemical environments. For a very long time, the seventh row was incomplete as most of its elements do not occur in nature. The missing elements beyond uranium started to be synthesized in the laboratory in 1940, when neptunium was made. (However, the first element to be discovered by synthesis rather than in nature was technetium in 1937.) The row was completed with the synthesis of tennessine in 2010 (the last element oganesson had already been made in 2002), and the last elements in this seventh row were given names in 2016.
This completes the modern periodic table, with all seven rows completely filled to capacity.
Electron configuration table
The following table shows the electron configuration of a neutral gas-phase atom of each element. Different configurations can be favoured in different chemical environments. The main-group elements have entirely regular electron configurations; the transition and inner transition elements show twenty irregularities due to the aforementioned competition between subshells close in energy level. For the last ten elements (109–118), experimental data is lacking and therefore calculated configurations have been shown instead. Completely filled subshells have been greyed out.
Variations
Period 1
Although the modern periodic table is standard today, the placement of the period 1 elements hydrogen and helium remains an open issue under discussion, and some variation can be found. Following their respective s1 and s2 electron configurations, hydrogen would be placed in group 1, and helium would be placed in group 2. The group 1 placement of hydrogen is common, but helium is almost always placed in group 18 with the other noble gases. The debate has to do with conflicting understandings of the extent to which chemical or electronic properties should decide periodic table placement.
Like the group 1 metals, hydrogen has one electron in its outermost shell and typically loses its only electron in chemical reactions. Hydrogen has some metal-like chemical properties, being able to displace some metals from their salts. But it forms a diatomic nonmetallic gas at standard conditions, unlike the alkali metals which are reactive solid metals. This and hydrogen's formation of hydrides, in which it gains an electron, bring it close to the properties of the halogens which do the same (though it is rarer for hydrogen to form H− than H+). Moreover, the lightest two halogens (fluorine and chlorine) are gaseous like hydrogen at standard conditions. Some properties of hydrogen are not a good fit for either group: hydrogen is neither highly oxidizing nor highly reducing and is not reactive with water. Hydrogen thus has properties corresponding to both the alkali metals and the halogens, but matches neither group perfectly, and is difficult to place by its chemistry. Therefore, while the electronic placement of hydrogen in group 1 predominates, some rarer arrangements show hydrogen in group 17, duplicate it in both groups 1 and 17, or float it separately from all groups. This last option has nonetheless been criticized by the chemist and philosopher of science Eric Scerri on the grounds that it appears to imply that hydrogen is above the periodic law altogether, unlike all the other elements.
Helium is the only element that routinely occupies a position in the periodic table that is not consistent with its electronic structure. It has two electrons in its outermost shell, whereas the other noble gases have eight; and it is an s-block element, whereas all other noble gases are p-block elements. However, it is unreactive at standard conditions, and has a full outer shell: these properties are like the noble gases in group 18, but not at all like the reactive alkaline earth metals of group 2. For these reasons helium is nearly universally placed in group 18, which its properties best match; a proposal to move helium to group 2 was rejected by IUPAC in 1988 on these grounds. Nonetheless, helium is still occasionally placed in group 2 today, and some of its physical and chemical properties are closer to the group 2 elements and support the electronic placement. Solid helium crystallises in a hexagonal close-packed structure, which matches beryllium and magnesium in group 2, but not the other noble gases in group 18. Recent theoretical developments in noble gas chemistry, in which helium is expected to show slightly less inertness than neon and to form (HeO)(LiF)2 with a structure similar to the analogous beryllium compound (but with no expected neon analogue), have resulted in more chemists advocating a placement of helium in group 2. This relates to the electronic argument, as the reason for neon's greater inertness is repulsion from its filled p-shell that helium lacks, though realistically it is unlikely that helium-containing molecules will be stable outside extreme low-temperature conditions (around 10 K).
The first-row anomaly in the periodic table has additionally been cited to support moving helium to group 2. It arises because the first orbital of any type is unusually small, since unlike its higher analogues, it does not experience interelectronic repulsion from a smaller orbital of the same type. This makes the first row of elements in each block unusually small, and such elements tend to exhibit characteristic kinds of anomalies for their group. Some chemists arguing for the repositioning of helium have pointed out that helium exhibits these anomalies if it is placed in group 2, but not if it is placed in group 18: on the other hand, neon, which would be the first group 18 element if helium was removed from that spot, does exhibit those anomalies. The relationship between helium and beryllium is then argued to resemble that between hydrogen and lithium, a placement which is much more commonly accepted. For example, because of this trend in the sizes of orbitals, a large difference in atomic radii between the first and second members of each main group is seen in groups 1 and 13–17: it exists between neon and argon, and between helium and beryllium, but not between helium and neon. This similarly affects the noble gases' boiling points and solubilities in water, where helium is too close to neon, and the large difference characteristic between the first two elements of a group appears only between neon and argon. Moving helium to group 2 makes this trend consistent in groups 2 and 18 as well, by making helium the first group 2 element and neon the first group 18 element: both exhibit the characteristic properties of a kainosymmetric first element of a group. The group 18 placement of helium nonetheless remains near-universal due to its extreme inertness. Additionally, tables that float both hydrogen and helium outside all groups may rarely be encountered.
Group 3
In many periodic tables, the f-block is shifted one element to the right, so that lanthanum and actinium become d-block elements in group 3, and Ce–Lu and Th–Lr form the f-block. Thus the d-block is split into two very uneven portions. This is a holdover from early mistaken measurements of electron configurations; modern measurements are more consistent with the form with lutetium and lawrencium in group 3, and with La–Yb and Ac–No as the f-block.
The 4f shell is completely filled at ytterbium, and for that reason Lev Landau and Evgeny Lifshitz in 1948 considered it incorrect to group lutetium as an f-block element. They did not yet take the step of removing lanthanum from the d-block as well, but Jun Kondō realized in 1963 that lanthanum's low-temperature superconductivity implied the activity of its 4f shell. In 1965, David C. Hamilton linked this observation to its position in the periodic table, and argued that the f-block should be composed of the elements La–Yb and Ac–No. Since then, physical, chemical, and electronic evidence has supported this assignment. The issue was brought to wide attention by William B. Jensen in 1982, and the reassignment of lutetium and lawrencium to group 3 was supported by IUPAC reports dating from 1988 (when the 1–18 group numbers were recommended) and 2021. The variation nonetheless still exists because most textbook writers are not aware of the issue.
A third form can sometimes be encountered in which the spaces below yttrium in group 3 are left empty, such as the table appearing on the IUPAC web site, but this creates an inconsistency with quantum mechanics by making the f-block 15 elements wide (La–Lu and Ac–Lr) even though only 14 electrons can fit in an f-subshell. There is moreover some confusion in the literature on which elements are then implied to be in group 3. While the 2021 IUPAC report noted that 15-element-wide f-blocks are supported by some practitioners of a specialized branch of relativistic quantum mechanics focusing on the properties of superheavy elements, the project's opinion was that such interest-dependent concerns should not have any bearing on how the periodic table is presented to "the general chemical and scientific community". Other authors focusing on superheavy elements since clarified that the "15th entry of the f-block represents the first slot of the d-block which is left vacant to indicate the place of the f-block inserts", which would imply that this form still has lutetium and lawrencium (the 15th entries in question) as d-block elements in group 3. Indeed, when IUPAC publications expand the table to 32 columns, they make this clear and place lutetium and lawrencium under yttrium in group 3.
Several arguments in favour of Sc-Y-La-Ac can be encountered in the literature, but they have been challenged as being logically inconsistent. For example, it has been argued that lanthanum and actinium cannot be f-block elements because as individual gas-phase atoms, they have not begun to fill the f-subshells. But the same is true of thorium which is never disputed as an f-block element, and this argument overlooks the problem on the other end: that the f-shells complete filling at ytterbium and nobelium, matching the Sc-Y-Lu-Lr form, and not at lutetium and lawrencium as the Sc-Y-La-Ac form would have it. Not only are such exceptional configurations in the minority, but they have also in any case never been considered as relevant for positioning any other elements on the periodic table: in gaseous atoms, the d-shells complete their filling at copper, palladium, and gold, but it is universally accepted by chemists that these configurations are exceptional and that the d-block really ends in accordance with the Madelung rule at zinc, cadmium, and mercury. The relevant fact for placement is that lanthanum and actinium (like thorium) have valence f-orbitals that can become occupied in chemical environments, whereas lutetium and lawrencium do not: their f-shells are in the core, and cannot be used for chemical reactions. Thus the relationship between yttrium and lanthanum is only a secondary relationship between elements with the same number of valence electrons but different kinds of valence orbitals, such as that between chromium and uranium; whereas the relationship between yttrium and lutetium is primary, sharing both valence electron count and valence orbital type.
Periodic trends
As chemical reactions involve the valence electrons, elements with similar outer electron configurations may be expected to react similarly and form compounds with similar proportions of elements in them. Such elements are placed in the same group, and thus there tend to be clear similarities and trends in chemical behaviour as one proceeds down a group. As analogous configurations occur at regular intervals, the properties of the elements thus exhibit periodic recurrences, hence the name of the periodic table and the periodic law. These periodic recurrences were noticed well before the underlying theory that explains them was developed.
Atomic radius
Historically, the physical size of atoms was unknown until the early 20th century. The first calculated estimate of the atomic radius of hydrogen was published by physicist Arthur Haas in 1910 to within an order of magnitude (a factor of 10) of the accepted value, the Bohr radius (~0.529 Å). In his model, Haas used a single-electron configuration based on the classical atomic model proposed by J. J. Thomson in 1904, often called the plum-pudding model.
Atomic radii (the size of atoms) are dependent on the sizes of their outermost orbitals. They generally decrease going left to right along the main-group elements, because the nuclear charge increases but the outer electrons are still in the same shell. However, going down a column, the radii generally increase, because the outermost electrons are in higher shells that are thus further away from the nucleus. The first row of each block is abnormally small, due to an effect called kainosymmetry or primogenic repulsion: the 1s, 2p, 3d, and 4f subshells have no inner analogues. For example, the 2p orbitals do not experience strong repulsion from the 1s and 2s orbitals, which have quite different angular charge distributions, and hence are not very large; but the 3p orbitals experience strong repulsion from the 2p orbitals, which have similar angular charge distributions. Thus higher s-, p-, d-, and f-subshells experience strong repulsion from their inner analogues, which have approximately the same angular distribution of charge, and must expand to avoid this. This makes significant differences arise between the small 2p elements, which prefer multiple bonding, and the larger 3p and higher p-elements, which do not. Similar anomalies arise for the 1s, 2p, 3d, 4f, and the hypothetical 5g elements: the degree of this first-row anomaly is highest for the s-block, is moderate for the p-block, and is less pronounced for the d- and f-blocks.
In the transition elements, an inner shell is filling, but the size of the atom is still determined by the outer electrons. The increasing nuclear charge across the series and the increased number of inner electrons for shielding somewhat compensate each other, so the decrease in radius is smaller. The 4p and 5d atoms, coming immediately after new types of transition series are first introduced, are smaller than would have been expected, because the added core 3d and 4f subshells provide only incomplete shielding of the nuclear charge for the outer electrons. Hence for example gallium atoms are slightly smaller than aluminium atoms. Together with kainosymmetry, this results in an even-odd difference between the periods (except in the s-block) that is sometimes known as secondary periodicity: elements in even periods have smaller atomic radii and prefer to lose fewer electrons, while elements in odd periods (except the first) differ in the opposite direction. Thus for example many properties in the p-block show a zigzag rather than a smooth trend along the group. For example, phosphorus and antimony in odd periods of group 15 readily reach the +5 oxidation state, whereas nitrogen, arsenic, and bismuth in even periods prefer to stay at +3. A similar situation holds for the d-block, with lutetium through tungsten atoms being slightly smaller than yttrium through molybdenum atoms respectively.
Thallium and lead atoms are about the same size as indium and tin atoms respectively, but from bismuth to radon the 6p atoms are larger than the analogous 5p atoms. This happens because when atomic nuclei become highly charged, special relativity becomes needed to gauge the effect of the nucleus on the electron cloud. These relativistic effects result in heavy elements increasingly having differing properties compared to their lighter homologues in the periodic table. Spin–orbit interaction splits the p-subshell: one p-orbital is relativistically stabilized and shrunken (it fills in thallium and lead), but the other two (filling in bismuth through radon) are relativistically destabilized and expanded. Relativistic effects also explain why gold is golden and mercury is a liquid at room temperature. They are expected to become very strong in the late seventh period, potentially leading to a collapse of periodicity. Electron configurations are only clearly known until element 108 (hassium), and experimental chemistry beyond 108 has only been done for elements 112 (copernicium) through 115 (moscovium), so the chemical characterization of the heaviest elements remains a topic of current research.
The trend that atomic radii decrease from left to right is also present in ionic radii, though it is more difficult to examine because the most common ions of consecutive elements normally differ in charge. Ions with the same electron configuration decrease in size as their atomic number rises, due to increased attraction from the more positively charged nucleus: thus for example ionic radii decrease in the series Se2−, Br−, Rb+, Sr2+, Y3+, Zr4+, Nb5+, Mo6+, Tc7+. Ions of the same element get smaller as more electrons are removed, because the attraction from the nucleus begins to outweigh the repulsion between electrons that causes electron clouds to expand: thus for example ionic radii decrease in the series V2+, V3+, V4+, V5+.
Ionisation energy
The first ionisation energy of an atom is the energy required to remove an electron from it. This varies with the atomic radius: ionisation energy increases left to right and down to up, because electrons that are closer to the nucleus are held more tightly and are more difficult to remove. Ionisation energy thus is minimized at the first element of each period – hydrogen and the alkali metals – and then generally rises until it reaches the noble gas at the right edge of the period. There are some exceptions to this trend, such as oxygen, where the electron being removed is paired and thus interelectronic repulsion makes it easier to remove than expected.
In the transition series, the outer electrons are preferentially lost even though the inner orbitals are filling. For example, in the 3d series, the 4s electrons are lost first even though the 3d orbitals are being filled. The shielding effect of adding an extra 3d electron approximately compensates the rise in nuclear charge, and therefore the ionisation energies stay mostly constant, though there is a small increase especially at the end of each transition series.
As metal atoms tend to lose electrons in chemical reactions, ionisation energy is generally correlated with chemical reactivity, although there are other factors involved as well.
Electron affinity
The opposite property to ionisation energy is the electron affinity, which is the energy released when adding an electron to the atom. A passing electron will be more readily attracted to an atom if it feels the pull of the nucleus more strongly, and especially if there is an available partially filled outer orbital that can accommodate it. Therefore, electron affinity tends to increase down to up and left to right. The exception is the last column, the noble gases, which have a full shell and have no room for another electron. This gives the halogens in the next-to-last column the highest electron affinities.
Some atoms, like the noble gases, have no electron affinity: they cannot form stable gas-phase anions. (They can form metastable resonances if the incoming electron arrives with enough kinetic energy, but these inevitably and rapidly autodetach: for example, the lifetime of the longest-lived He− level is about 359 microseconds.) The noble gases, having high ionisation energies and no electron affinity, have little inclination towards gaining or losing electrons and are generally unreactive.
Some exceptions to the trends occur: oxygen and fluorine have lower electron affinities than their heavier homologues sulfur and chlorine, because they are small atoms and hence the newly added electron would experience significant repulsion from the already present ones. For the nonmetallic elements, electron affinity likewise somewhat correlates with reactivity, but not perfectly since other factors are involved. For example, fluorine has a lower electron affinity than chlorine (because of extreme interelectronic repulsion for the very small fluorine atom), but is more reactive.
Valence and oxidation states
The valence of an element can be defined either as the number of hydrogen atoms that can combine with it to form a simple binary hydride, or as twice the number of oxygen atoms that can combine with it to form a simple binary oxide (that is, not a peroxide or a superoxide). The valences of the main-group elements are directly related to the group number: the hydrides in the main groups 1–2 and 13–17 follow the formulae MH, MH2, MH3, MH4, MH3, MH2, and finally MH. The highest oxides instead increase in valence, following the formulae M2O, MO, M2O3, MO2, M2O5, MO3, M2O7. Today the notion of valence has been extended by that of the oxidation state, which is the formal charge left on an element when all other elements in a compound have been removed as their ions.
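The regularity just described is simple enough to state programmatically. The following Python sketch (an illustration added here, not drawn from the article) reproduces the hydride and highest-oxide formulae listed above for main groups 1–2 and 13–17, treating the number of valence electrons as the group number (minus ten for the p-block groups).

```python
# Illustrative sketch: hydride and highest-oxide formulae for main groups
# 1-2 and 13-17, following the sequences quoted in the text above.

def valence_electrons(group):
    return group if group <= 2 else group - 10

def hydride_formula(group):
    """Valence toward hydrogen rises to 4 at group 14, then falls back to 1."""
    n = valence_electrons(group)
    v = n if n <= 4 else 8 - n
    return "MH" + (str(v) if v > 1 else "")

def highest_oxide_formula(group):
    """Valence toward oxygen equals the number of valence electrons."""
    v = valence_electrons(group)
    if v % 2:                                       # odd: M2O, M2O3, M2O5, M2O7
        return "M2O" + (str(v) if v > 1 else "")
    return "MO" + (str(v // 2) if v > 2 else "")    # even: MO, MO2, MO3

for g in (1, 2, 13, 14, 15, 16, 17):
    print(g, hydride_formula(g), highest_oxide_formula(g))
# -> MH/M2O, MH2/MO, MH3/M2O3, MH4/MO2, MH3/M2O5, MH2/MO3, MH/M2O7
```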
The electron configuration suggests a ready explanation from the number of electrons available for bonding; indeed, the number of valence electrons starts at 1 in group 1, and then increases towards the right side of the periodic table, only resetting at 3 whenever each new block starts. Thus in period 6, Cs–Ba have 1–2 valence electrons; La–Yb have 3–16; Lu–Hg have 3–12; and Tl–Rn have 3–8. However, towards the right side of the d- and f-blocks, the theoretical maximum corresponding to using all valence electrons is not achievable at all; the same situation affects oxygen, fluorine, and the light noble gases up to krypton.
A full explanation requires considering the energy that would be released in forming compounds with different valences rather than simply considering electron configurations alone. For example, magnesium forms Mg2+ rather than Mg+ cations when dissolved in water, because the latter would spontaneously disproportionate into Mg0 and Mg2+ cations. This is because the enthalpy of hydration (surrounding the cation with water molecules) increases in magnitude with the charge and radius of the ion. In Mg+, the outermost orbital (which determines ionic radius) is still 3s, so the hydration enthalpy is small and insufficient to compensate the energy required to remove the electron; but ionizing again to Mg2+ uncovers the core 2p subshell, making the hydration enthalpy large enough to allow magnesium(II) compounds to form. For similar reasons, the common oxidation states of the heavier p-block elements (where the ns electrons become lower in energy than the np) tend to vary by steps of 2, because that is necessary to uncover an inner subshell and decrease the ionic radius (e.g. Tl+ uncovers 6s, and Tl3+ uncovers 5d, so once thallium loses two electrons it tends to lose the third one as well). Analogous arguments based on orbital hybridization can be used for the less electronegative p-block elements.
For transition metals, common oxidation states are nearly always at least +2 for similar reasons (uncovering the next subshell); this holds even for the metals with anomalous dx+1s1 or dx+2s0 configurations (except for silver), because repulsion between d-electrons means that the movement of the second electron from the s- to the d-subshell does not appreciably change its ionisation energy. Because ionizing the transition metals further does not uncover any new inner subshells, their oxidation states tend to vary by steps of 1 instead. The lanthanides and late actinides generally show a stable +3 oxidation state, removing the outer s-electrons and then (usually) one electron from the (n−2)f-orbitals, which are similar in energy to ns. The common and maximum oxidation states of the d- and f-block elements tend to depend on the ionisation energies. As the energy difference between the (n−1)d and ns orbitals rises along each transition series, it becomes less energetically favourable to ionize further electrons. Thus, the early transition metal groups tend to prefer higher oxidation states, but the +2 oxidation state becomes more stable for the late transition metal groups. The highest formal oxidation state thus increases from +3 at the beginning of each d-block row, to +7 or +8 in the middle (e.g. OsO4), and then decreases to +2 at the end. The lanthanides and late actinides usually have high fourth ionisation energies and hence rarely surpass the +3 oxidation state, whereas early actinides have low fourth ionisation energies and so for example neptunium and plutonium can reach +7. The very last actinides go further than the lanthanides towards low oxidation states: mendelevium is more easily reduced to the +2 state than thulium or even europium (the lanthanide with the most stable +2 state, on account of its half-filled f-shell), and nobelium outright favours +2 over +3, in contrast to ytterbium.
As elements in the same group share the same valence configurations, they usually exhibit similar chemical behaviour. For example, the alkali metals in the first group all have one valence electron, and form a very homogeneous class of elements: they are all soft and reactive metals. However, there are many factors involved, and groups can often be rather heterogeneous. For instance, hydrogen also has one valence electron and is in the same group as the alkali metals, but its chemical behaviour is quite different. The stable elements of group 14 comprise a nonmetal (carbon), two semiconductors (silicon and germanium), and two metals (tin and lead); they are nonetheless united by having four valence electrons. This often leads to similarities in maximum and minimum oxidation states (e.g. sulfur and selenium in group 16 both have maximum oxidation state +6, as in SO3 and SeO3, and minimum oxidation state −2, as in sulfides and selenides); but not always (e.g. oxygen is not known to form oxidation state +6, despite being in the same group as sulfur and selenium).
Electronegativity
Another important property of elements is their electronegativity. Atoms can form covalent bonds to each other by sharing electrons in pairs, creating an overlap of valence orbitals. The degree to which each atom attracts the shared electron pair depends on the atom's electronegativity – the tendency of an atom towards gaining or losing electrons. The more electronegative atom will tend to attract the electron pair more, and the less electronegative (or more electropositive) one will attract it less. In extreme cases, the electron can be thought of as having been passed completely from the more electropositive atom to the more electronegative one, though this is a simplification. The bond then binds two ions, one positive (having given up the electron) and one negative (having accepted it), and is termed an ionic bond.
Electronegativity depends on how strongly the nucleus can attract an electron pair, and so it exhibits a similar variation to the other properties already discussed: electronegativity tends to rise going down to up and left to right. The alkali and alkaline earth metals are among the most electropositive elements, while the chalcogens, halogens, and noble gases are among the most electronegative ones.
Electronegativity is generally measured on the Pauling scale, on which the most electronegative reactive atom (fluorine) is given electronegativity 4.0, and the least electronegative atom (caesium) is given electronegativity 0.79. In fact neon is the most electronegative element, but the Pauling scale cannot measure its electronegativity because it does not form covalent bonds with most elements.
An element's electronegativity varies with the identity and number of the atoms it is bonded to, as well as how many electrons it has already lost: an atom becomes more electronegative when it has lost more electrons. This sometimes makes a large difference: lead in the +2 oxidation state has electronegativity 1.87 on the Pauling scale, while lead in the +4 oxidation state has electronegativity 2.33.
Metallicity
A simple substance is a substance formed from atoms of one chemical element. The simple substances of the more electronegative atoms tend to share electrons (form covalent bonds) with each other. They form either small molecules (like hydrogen or oxygen, whose atoms bond in pairs) or giant structures stretching indefinitely (like carbon or silicon). The noble gases simply stay as single atoms, as they already have a full shell. Substances composed of discrete molecules or single atoms are held together by weaker attractive forces between the molecules, such as the London dispersion force: as electrons move within the molecules, they create momentary imbalances of electrical charge, which induce similar imbalances on nearby molecules and create synchronized movements of electrons across many neighbouring molecules.
The more electropositive atoms, however, tend to instead lose electrons, creating a "sea" of electrons engulfing cations. The outer orbitals of one atom overlap to share electrons with all its neighbours, creating a giant structure of molecular orbitals extending over all the atoms. This negatively charged "sea" pulls on all the ions and keeps them together in a metallic bond. Elements forming such bonds are often called metals; those which do not are often called nonmetals. Some elements can form multiple simple substances with different structures: these are called allotropes. For example, diamond and graphite are two allotropes of carbon.
The metallicity of an element can be predicted from electronic properties. When atomic orbitals overlap during metallic or covalent bonding, they create both bonding and antibonding molecular orbitals of equal capacity, with the antibonding orbitals of higher energy. Net bonding character occurs when there are more electrons in the bonding orbitals than there are in the antibonding orbitals. Metallic bonding is thus possible when the number of electrons delocalized by each atom is less than twice the number of orbitals contributing to the overlap. This is the situation for elements in groups 1 through 13; they also have too few valence electrons to form giant covalent structures where all atoms take equivalent positions, and so almost all of them metallise. The exceptions are hydrogen and boron, which have too high an ionisation energy. Hydrogen thus forms a covalent H2 molecule, and boron forms a giant covalent structure based on icosahedral B12 clusters. In a metal, the bonding and antibonding orbitals have overlapping energies, creating a single band that electrons can freely flow through, allowing for electrical conduction.
In group 14, both metallic and covalent bonding become possible. In a diamond crystal, covalent bonds between carbon atoms are strong, because they have a small atomic radius and thus the nucleus has more of a hold on the electrons. Therefore, the bonding orbitals that result are much lower in energy than the antibonding orbitals, and there is no overlap, so electrical conduction becomes impossible: carbon is a nonmetal. However, covalent bonding becomes weaker for larger atoms and the energy gap between the bonding and antibonding orbitals decreases. Therefore, silicon and germanium have smaller band gaps and are semiconductors at ambient conditions: electrons can cross the gap when thermally excited. (Boron is also a semiconductor at ambient conditions.) The band gap disappears in tin, so that tin and lead become metals. As the temperature rises, all nonmetals develop some semiconducting properties, to a greater or lesser extent depending on the size of the band gap. Thus metals and nonmetals may be distinguished by the temperature dependence of their electrical conductivity: a metal's conductivity lowers as temperature rises (because thermal motion makes it more difficult for the electrons to flow freely), whereas a nonmetal's conductivity rises (as more electrons may be excited to cross the gap).
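The role of the band gap can be made concrete with a rough order-of-magnitude estimate. In the sketch below (an illustration, not from the article; the band-gap values are typical literature figures assumed for the purpose), the fraction of electrons thermally excited across a gap Eg at temperature T scales roughly as exp(−Eg/2kT), which is why diamond insulates, silicon and germanium semiconduct, and grey tin, with no gap, conducts like a metal at room temperature.

```python
# Rough estimate: relative number of thermally excited carriers across a band
# gap Eg scales as exp(-Eg / 2kT). Band-gap values below are typical literature
# figures, assumed here for illustration only.
import math

K_B = 8.617e-5   # Boltzmann constant, eV per kelvin
T = 300          # room temperature, kelvin

band_gaps_eV = {"diamond": 5.5, "silicon": 1.1, "germanium": 0.67, "grey tin": 0.0}

for material, eg in band_gaps_eV.items():
    factor = math.exp(-eg / (2 * K_B * T))
    print(f"{material:9s}  Eg = {eg:4.2f} eV   excitation factor ~ {factor:.1e}")
```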
Elements in groups 15 through 17 have too many electrons to form giant covalent molecules that stretch in all three dimensions. For the lighter elements, the bonds in small diatomic molecules are so strong that a condensed phase is disfavoured: thus nitrogen (N2), oxygen (O2), white phosphorus and yellow arsenic (P4 and As4), sulfur and red selenium (S8 and Se8), and the stable halogens (F2, Cl2, Br2, and I2) readily form covalent molecules with few atoms. The heavier ones tend to form long chains (e.g. red phosphorus, grey selenium, tellurium) or layered structures (e.g. carbon as graphite, black phosphorus, grey arsenic, antimony, bismuth) that only extend in one or two rather than three dimensions. Both kinds of structures can be found as allotropes of phosphorus, arsenic, and selenium, although the long-chained allotropes are more stable in all three. As these structures do not use all their orbitals for bonding, they end up with bonding, nonbonding, and antibonding bands in order of increasing energy. Similarly to group 14, the band gaps shrink for the heavier elements and free movement of electrons between the chains or layers becomes possible. Thus for example black phosphorus, black arsenic, grey selenium, tellurium, and iodine are semiconductors; grey arsenic, antimony, and bismuth are semimetals (exhibiting quasi-metallic conduction, with a very small band overlap); and polonium and probably astatine are true metals. Finally, the natural group 18 elements all stay as individual atoms.
The dividing line between metals and nonmetals is roughly diagonal from top left to bottom right, with the transition series appearing to the left of this diagonal (as they have many available orbitals for overlap). This is expected, as metallicity tends to be correlated with electropositivity and the willingness to lose electrons, which increases right to left and up to down. Thus the metals greatly outnumber the nonmetals. Elements near the borderline are difficult to classify: they tend to have properties that are intermediate between those of metals and nonmetals, and may have some properties characteristic of both. They are often termed semimetals or metalloids. The term "semimetal" used in this sense should not be confused with its strict physical sense having to do with band structure: bismuth is physically a semimetal, but is generally considered a metal by chemists.
The following table considers the most stable allotropes at standard conditions. The elements coloured yellow form simple substances that are well-characterised by metallic bonding. Elements coloured light blue form giant network covalent structures, whereas those coloured dark blue form small covalently bonded molecules that are held together by weaker van der Waals forces. The noble gases are coloured in violet: their molecules are single atoms and no covalent bonding occurs. Greyed-out cells are for elements which have not been prepared in sufficient quantities for their most stable allotropes to have been characterized in this way. Theoretical considerations and current experimental evidence suggest that all of those elements would metallise if they could form condensed phases, except perhaps for oganesson.
Generally, metals are shiny and dense. They usually have high melting and boiling points due to the strength of the metallic bond, and are often malleable and ductile (easily stretched and shaped) because the atoms can move relative to each other without breaking the metallic bond. They conduct electricity because their electrons are free to move in all three dimensions. Similarly, they conduct heat, which is transferred by the electrons as extra kinetic energy: they move faster. These properties persist in the liquid state, as although the crystal structure is destroyed on melting, the atoms still touch and the metallic bond persists, though it is weakened. Metals tend to be reactive towards nonmetals. Some exceptions can be found to these generalizations: for example, beryllium, chromium, manganese, antimony, bismuth, and uranium are brittle (not an exhaustive list); chromium is extremely hard; gallium, rubidium, caesium, and mercury are liquid at or close to room temperature; and noble metals such as gold are chemically very inert.
Nonmetals exhibit different properties. Those forming giant covalent crystals exhibit high melting and boiling points, as it takes considerable energy to overcome the strong covalent bonds. Those forming discrete molecules are held together mostly by dispersion forces, which are more easily overcome; thus they tend to have lower melting and boiling points, and many are liquids or gases at room temperature. Nonmetals are often dull-looking. They tend to be reactive towards metals, except for the noble gases, which are inert towards most substances. They are brittle when solid as their atoms are held tightly in place. They are less dense and conduct electricity poorly, because there are no mobile electrons. Near the borderline, band gaps are small and thus many elements in that region are semiconductors, such as silicon, germanium, and tellurium. Selenium has both a semiconducting grey allotrope and an insulating red allotrope; arsenic has a metallic grey allotrope, a semiconducting black allotrope, and an insulating yellow allotrope (though the last is unstable at ambient conditions). Again there are exceptions; for example, diamond has the highest thermal conductivity of all known materials, greater than any metal.
It is common to designate a class of metalloids straddling the boundary between metals and nonmetals, as elements in that region are intermediate in both physical and chemical properties. However, no consensus exists in the literature for precisely which elements should be so designated. When such a category is used, silicon, germanium, arsenic, and tellurium are almost always included, and boron and antimony usually are; but most sources include other elements as well, without agreement on which extra elements should be added, and some others subtract from this list instead. For example, unlike all the other elements generally considered metalloids or nonmetals, antimony's only stable form has metallic conductivity. Moreover, the element resembles bismuth and, more generally, the other p-block metals in its physical and chemical behaviour. On this basis some authors have argued that it is better classified as a metal than as a metalloid. On the other hand, selenium has some semiconducting properties in its most stable form (though it also has insulating allotropes) and it has been argued that it should be considered a metalloid – though this situation also holds for phosphorus, which is a much rarer inclusion among the metalloids.
Further manifestations of periodicity
There are some other relationships throughout the periodic table between elements that are not in the same group, such as the diagonal relationships between elements that are diagonally adjacent (e.g. lithium and magnesium). Some similarities can also be found between the main groups and the transition metal groups, or between the early actinides and early transition metals, when the elements have the same number of valence electrons. Thus uranium somewhat resembles chromium and tungsten in group 6, as all three have six valence electrons. Relationships between elements with the same number of valence electrons but different types of valence orbital have been called secondary or isodonor relationships: they usually have the same maximum oxidation states, but not the same minimum oxidation states. For example, chlorine and manganese both have +7 as their maximum oxidation state (e.g. Cl2O7 and Mn2O7), but their respective minimum oxidation states are −1 (e.g. HCl) and −3 (K3[Mn(CO)4]). Elements with the same number of valence vacancies but different numbers of valence electrons are related by a tertiary or isoacceptor relationship: they usually have similar minimum but not maximum oxidation states. For example, hydrogen and chlorine both have −1 as their minimum oxidation state (in hydrides and chlorides), but hydrogen's maximum oxidation state is +1 (e.g. H2O) while chlorine's is +7.
Many other physical properties of the elements exhibit periodic variation in accordance with the periodic law, such as melting points, boiling points, heats of fusion, heats of vaporization, atomisation energy, and so on. Similar periodic variations appear for the compounds of the elements, which can be observed by comparing hydrides, oxides, sulfides, halides, and so on. Chemical properties are more difficult to describe quantitatively, but likewise exhibit their own periodicities. Examples include the variation in the acidic and basic properties of the elements and their compounds, the stabilities of compounds, and methods of isolating the elements. Periodicity is and has been used very widely to predict the properties of unknown new elements and new compounds, and is central to modern chemistry.
Classification of elements
Many terms have been used in the literature to describe sets of elements that behave similarly. The group names alkali metal, alkaline earth metal, triel, tetrel, pnictogen, chalcogen, halogen, and noble gas are acknowledged by IUPAC; the other groups can be referred to by their number, or by their first element (e.g., group 6 is the chromium group). Some divide the p-block elements from groups 13 to 16 by metallicity, although there is neither an IUPAC definition nor a precise consensus on exactly which elements should be considered metals, nonmetals, or semi-metals (sometimes called metalloids). Neither is there a consensus on what the metals succeeding the transition metals ought to be called, with post-transition metal and poor metal being among the names that have been used. Some advanced monographs exclude the elements of group 12 from the transition metals on the grounds of their sometimes quite different chemical properties, but this is not a universal practice and IUPAC does not presently mention it as allowable in its Principles of Chemical Nomenclature.
The lanthanides are considered to be the elements La–Lu, which are all very similar to each other: historically they included only Ce–Lu, but lanthanum became included by common usage. The rare earth elements (or rare earth metals) add scandium and yttrium to the lanthanides. Analogously, the actinides are considered to be the elements Ac–Lr (historically Th–Lr), although variation of properties in this set is much greater than within the lanthanides. IUPAC recommends the names lanthanoids and actinoids to avoid ambiguity, as the -ide suffix typically denotes a negative ion; however lanthanides and actinides remain common. With the increasing recognition of lutetium and lawrencium as d-block elements, some authors began to define the lanthanides as La–Yb and the actinides as Ac–No, matching the f-block. The transactinides or superheavy elements are the short-lived elements beyond the actinides, starting at lawrencium or rutherfordium (depending on where the actinides are taken to end).
Many more categorizations exist and are used according to certain disciplines. In astrophysics, a metal is defined as any element with atomic number greater than 2, i.e. anything except hydrogen and helium. The term "semimetal" has a different definition in physics than it does in chemistry: bismuth is a semimetal by physical definitions, but chemists generally consider it a metal. A few terms are widely used, but without any very formal definition, such as "heavy metal", which has been given such a wide range of definitions that it has been criticized as "effectively meaningless".
The scope of terms varies significantly between authors. For example, according to IUPAC, the noble gases extend to include the whole group, including the very radioactive superheavy element oganesson. However, among those who specialize in the superheavy elements, this is not often done: in this case "noble gas" is typically taken to imply the unreactive behaviour of the lighter elements of the group. Since calculations generally predict that oganesson should not be particularly inert due to relativistic effects, and may not even be a gas at room temperature if it could be produced in bulk, its status as a noble gas is often questioned in this context. Furthermore, national variations are sometimes encountered: in Japan, alkaline earth metals often do not include beryllium and magnesium as their behaviour is different from the heavier group 2 metals.
History
Early history
In 1817, German physicist Johann Wolfgang Döbereiner began to formulate one of the earliest attempts to classify the elements. In 1829, he found that he could form some of the elements into groups of three, with the members of each group having related properties. He termed these groups triads. Chlorine, bromine, and iodine formed a triad; as did calcium, strontium, and barium; lithium, sodium, and potassium; and sulfur, selenium, and tellurium. Today, all these triads form part of modern-day groups: the halogens, alkaline earth metals, alkali metals, and chalcogens. Various chemists continued his work and were able to identify more and more relationships between small groups of elements. However, they could not build one scheme that encompassed them all.
John Newlands published a letter in the Chemical News in February 1863 on the periodicity among the chemical elements. In 1864 Newlands published an article in the Chemical News showing that if the elements are arranged in the order of their atomic weights, those having consecutive numbers frequently either belong to the same group or occupy similar positions in different groups, and he pointed out that each eighth element starting from a given one is in this arrangement a kind of repetition of the first, like the eighth note of an octave in music (The Law of Octaves). However, Newlands's formulation only worked well for the main-group elements, and encountered serious problems with the others.
German chemist Lothar Meyer noted the sequences of similar chemical and physical properties repeated at periodic intervals. According to him, if the atomic weights were plotted as ordinates (i.e. vertically) and the atomic volumes as abscissas (i.e. horizontally), the curve obtained a series of maxima and minima, and the most electropositive elements would appear at the peaks of the curve in the order of their atomic weights. In 1864, a book of his was published; it contained an early version of the periodic table containing 28 elements, and classified elements into six families by their valence: for the first time, elements had been grouped according to their valence. Works on organizing the elements by atomic weight had until then been stymied by inaccurate measurements of the atomic weights. In 1868, he revised his table, but this revision was published as a draft only after his death.
Mendeleev
The definitive breakthrough came from the Russian chemist Dmitri Mendeleev. Although other chemists (including Meyer) had found some other versions of the periodic system at about the same time, Mendeleev was the most dedicated to developing and defending his system, and it was his system that most affected the scientific community. On 17 February 1869 (1 March 1869 in the Gregorian calendar), Mendeleev began arranging the elements and comparing them by their atomic weights. He began with a few elements, and over the course of the day his system grew until it encompassed most of the known elements. After he found a consistent arrangement, his printed table appeared in May 1869 in the journal of the Russian Chemical Society. When elements did not appear to fit in the system, he boldly predicted that either valencies or atomic weights had been measured incorrectly, or that there was a missing element yet to be discovered. In 1871, Mendeleev published a long article, including an updated form of his table, that made his predictions for unknown elements explicit. Mendeleev predicted the properties of three of these unknown elements in detail: as they would be missing heavier homologues of boron, aluminium, and silicon, he named them eka-boron, eka-aluminium, and eka-silicon ("eka" being Sanskrit for "one").
In 1875, the French chemist Paul-Émile Lecoq de Boisbaudran, working without knowledge of Mendeleev's prediction, discovered a new element in a sample of the mineral sphalerite, and named it gallium. He isolated the element and began determining its properties. Mendeleev, reading de Boisbaudran's publication, sent a letter claiming that gallium was his predicted eka-aluminium. Although Lecoq de Boisbaudran was initially sceptical, and suspected that Mendeleev was trying to take credit for his discovery, he later admitted that Mendeleev was correct. In 1879, the Swedish chemist Lars Fredrik Nilson discovered a new element, which he named scandium: it turned out to be eka-boron. Eka-silicon was found in 1886 by German chemist Clemens Winkler, who named it germanium. The properties of gallium, scandium, and germanium matched what Mendeleev had predicted. In 1889, Mendeleev noted at the Faraday Lecture to the Royal Institution in London that he had not expected to live long enough "to mention their discovery to the Chemical Society of Great Britain as a confirmation of the exactitude and generality of the periodic law". Even the discovery of the noble gases at the close of the 19th century, which Mendeleev had not predicted, fitted neatly into his scheme as an eighth main group.
Mendeleev nevertheless had some trouble fitting the known lanthanides into his scheme, as they did not exhibit the periodic change in valencies that the other elements did. After much investigation, the Czech chemist Bohuslav Brauner suggested in 1902 that the lanthanides could all be placed together in one group on the periodic table. He named this the "asteroid hypothesis" as an astronomical analogy: just as there is an asteroid belt instead of a single planet between Mars and Jupiter, so the place below yttrium was thought to be occupied by all the lanthanides instead of just one element.
Atomic number
After the internal structure of the atom was probed, amateur Dutch physicist Antonius van den Broek proposed in 1913 that the nuclear charge determined the placement of elements in the periodic table. The New Zealand physicist Ernest Rutherford coined the word "atomic number" for this nuclear charge. In van den Broek's published article he illustrated the first electronic periodic table showing the elements arranged according to the number of their electrons. Rutherford confirmed in his 1914 paper that Bohr had accepted the view of van den Broek.
The same year, English physicist Henry Moseley using X-ray spectroscopy confirmed van den Broek's proposal experimentally. Moseley determined the value of the nuclear charge of each element from aluminium to gold and showed that Mendeleev's ordering actually places the elements in sequential order by nuclear charge. Nuclear charge is identical to proton count and determines the value of the atomic number (Z) of each element. Using atomic number gives a definitive, integer-based sequence for the elements. Moseley's research immediately resolved discrepancies between atomic weight and chemical properties; these were cases such as tellurium and iodine, where atomic number increases but atomic weight decreases. Although Moseley was soon killed in World War I, the Swedish physicist Manne Siegbahn continued his work up to uranium, and established that it was the element with the highest atomic number then known (92). Based on Moseley and Siegbahn's research, it was also known which atomic numbers corresponded to missing elements yet to be found: 43, 61, 72, 75, 85, and 87. (Element 75 had in fact already been found by Japanese chemist Masataka Ogawa in 1908 and named nipponium, but he mistakenly assigned it as element 43 instead of 75 and so his discovery was not generally recognized until later. The contemporarily accepted discovery of element 75 came in 1925, when Walter Noddack, Ida Tacke, and Otto Berg independently rediscovered it and gave it its present name, rhenium.)
The dawn of atomic physics also clarified the situation of isotopes. In the decay chains of the primordial radioactive elements thorium and uranium, it soon became evident that there were many apparent new elements that had different atomic weights but exactly the same chemical properties. In 1913, Frederick Soddy coined the term "isotope" to describe this situation, and considered isotopes to merely be different forms of the same chemical element. This furthermore clarified discrepancies such as tellurium and iodine: tellurium's natural isotopic composition is weighted towards heavier isotopes than iodine's, but tellurium has a lower atomic number.
Electron shells
The Danish physicist Niels Bohr applied Max Planck's idea of quantization to the atom. He concluded that the energy levels of electrons were quantised: only a discrete set of stable energy states were allowed. Bohr then attempted to understand periodicity through electron configurations, surmising in 1913 that the inner electrons should be responsible for the chemical properties of the element. In 1913, he produced the first electronic periodic table based on a quantum atom.
Bohr called his electron shells "rings" in 1913: atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum number of electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8." However, in larger atoms the innermost shell would contain eight electrons: "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur." His proposed electron configurations for the atoms mostly do not accord with those now known; they were improved after Arnold Sommerfeld and Edmund Stoner discovered more quantum numbers.
The first one to systematically expand and correct the chemical potentials of Bohr's atomic theory was Walther Kossel in 1914 and in 1916. Kossel explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." (Translated in Helge Kragh, "Lars Vegard, Atomic Structure, and the Periodic System", Bull. Hist. Chem. 37 (1), 2012, p. 43.)
In a 1919 paper, Irving Langmuir postulated the existence of "cells" which we now call orbitals, which could each contain only two electrons, and these were arranged in "equidistant layers" which we now call shells. He made an exception for the first shell to only contain two electrons. The chemist Charles Rugeley Bury suggested in 1921 that eight and eighteen electrons in a shell form stable configurations. Bury proposed that the electron configurations in transitional elements depended upon the valence electrons in their outer shell. He introduced the word transition to describe the elements now known as transition metals or transition elements. Bohr's theory was vindicated by the discovery of element 72: Georges Urbain claimed to have discovered it as the rare earth element celtium, but Bury and Bohr had predicted that element 72 could not be a rare earth element and had to be a homologue of zirconium. Dirk Coster and Georg von Hevesy searched for the element in zirconium ores and found element 72, which they named hafnium after Bohr's hometown of Copenhagen (Hafnia in Latin). Urbain's celtium proved to be simply purified lutetium (element 71). Hafnium and rhenium thus became the last stable elements to be discovered.
Prompted by Bohr, Wolfgang Pauli took up the problem of electron configurations in 1923. Pauli extended Bohr's scheme to use four quantum numbers, and formulated his exclusion principle which stated that no two electrons could have the same four quantum numbers. This explained the lengths of the periods in the periodic table (2, 8, 18, and 32), which corresponded to the number of electrons that each shell could occupy. In 1925, Friedrich Hund arrived at configurations close to the modern ones. As a result of these advances, periodicity became based on the number of chemically active or valence electrons rather than by the valences of the elements. The Aufbau principle that describes the electron configurations of the elements was first empirically observed by Erwin Madelung in 1926, though the first to publish it was Vladimir Karapetoff in 1930. In 1961, Vsevolod Klechkovsky derived the first part of the Madelung rule (that orbitals fill in order of increasing n + ℓ) from the Thomas–Fermi model; the complete rule was derived from a similar potential in 1971 by Yury N. Demkov and Valentin N. Ostrovsky.
The quantum theory clarified the transition metals and lanthanides as forming their own separate groups, transitional between the main groups, although some chemists had already proposed tables showing them this way before then: the English chemist Henry Bassett did so in 1892, the Danish chemist Julius Thomsen in 1895, and the Swiss chemist Alfred Werner in 1905. Bohr used Thomsen's form in his 1922 Nobel Lecture; Werner's form is very similar to the modern 32-column form. In particular, this supplanted Brauner's asteroidal hypothesis.
The exact position of the lanthanides, and thus the composition of group 3, remained under dispute for decades longer because their electron configurations were initially measured incorrectly. On chemical grounds Bassett, Werner, and Bury grouped scandium and yttrium with lutetium rather than lanthanum (the former two left an empty space below yttrium as lutetium had not yet been discovered). Hund assumed in 1927 that all the lanthanide atoms had configuration [Xe]4f0−14 5d1 6s2, on account of their prevailing trivalency. It is now known that the relationship between chemistry and electron configuration is more complicated than that. Early spectroscopic evidence seemed to confirm these configurations, and thus the periodic table was structured to have group 3 as scandium, yttrium, lanthanum, and actinium, with fourteen f-elements breaking up the d-block between lanthanum and hafnium. But it was later discovered that this is only true for four of the fifteen lanthanides (lanthanum, cerium, gadolinium, and lutetium), and that the other lanthanide atoms do not have a d-electron. In particular, ytterbium completes the 4f shell and thus Soviet physicists Lev Landau and Evgeny Lifshitz noted in 1948 that lutetium is correctly regarded as a d-block rather than an f-block element; that bulk lanthanum is an f-metal was first suggested by Jun Kondō in 1963, on the grounds of its low-temperature superconductivity. This clarified the importance of looking at low-lying excited states of atoms that can play a role in chemical environments when classifying elements by block and positioning them on the table. Many authors subsequently rediscovered this correction based on physical, chemical, and electronic concerns and applied it to all the relevant elements, thus making group 3 contain scandium, yttrium, lutetium, and lawrencium and having lanthanum through ytterbium and actinium through nobelium as the f-block rows: this corrected version achieves consistency with the Madelung rule and vindicates Bassett, Werner, and Bury's initial chemical placement.
In 1988, IUPAC released a report supporting this composition of group 3, a decision that was reaffirmed in 2021. Variation can still be found in textbooks on the composition of group 3, and some argumentation against this format is still published today, but chemists and physicists who have considered the matter largely agree on group 3 containing scandium, yttrium, lutetium, and lawrencium and challenge the counterarguments as being inconsistent.
Synthetic elements
By 1936, the pool of missing elements from hydrogen to uranium had shrunk to four: elements 43, 61, 85, and 87 remained missing. Element 43 eventually became the first element to be synthesized artificially via nuclear reactions rather than discovered in nature. It was discovered in 1937 by Italian chemists Emilio Segrè and Carlo Perrier, who named their discovery technetium, after the Greek word for "artificial". Elements 61 (promethium) and 85 (astatine) were likewise produced artificially in 1945 and 1940 respectively; element 87 (francium) became the last element to be discovered in nature, by French chemist Marguerite Perey in 1939. The elements beyond uranium were likewise discovered artificially, starting with Edwin McMillan and Philip Abelson's 1940 discovery of neptunium (via bombardment of uranium with neutrons). Glenn T. Seaborg and his team at the Lawrence Berkeley National Laboratory (LBNL) continued discovering transuranium elements, starting with plutonium in 1941, and discovered that contrary to previous thinking, the elements from actinium onwards were congeners of the lanthanides rather than transition metals. Bassett (1892), Werner (1905), and the French engineer Charles Janet (1928) had previously suggested this, but their ideas did not then receive general acceptance. Seaborg thus called them the actinides. Elements up to 101 (named mendelevium in honour of Mendeleev) were synthesized up to 1955, either through neutron or alpha-particle irradiation, or in nuclear explosions in the cases of 99 (einsteinium) and 100 (fermium).
A significant controversy arose with elements 102 through 106 in the 1960s and 1970s, as competition arose between the LBNL team (now led by Albert Ghiorso) and a team of Soviet scientists at the Joint Institute for Nuclear Research (JINR) led by Georgy Flyorov. Each team claimed discovery, and in some cases each proposed their own name for the element, creating an element naming controversy that lasted decades. These elements were made by bombardment of actinides with light ions. IUPAC at first adopted a hands-off approach, preferring to wait and see if a consensus would be forthcoming. But as it was also the height of the Cold War, it became clear that this would not happen. As such, IUPAC and the International Union of Pure and Applied Physics (IUPAP) created a Transfermium Working Group (TWG, fermium being element 100) in 1985 to set out criteria for discovery, which were published in 1991. After some further controversy, these elements received their final names in 1997, including seaborgium (106) in honour of Seaborg.
The TWG's criteria were used to arbitrate later element discovery claims from LBNL and JINR, as well as from research institutes in Germany (GSI) and Japan (Riken). Currently, consideration of discovery claims is performed by an IUPAC/IUPAP Joint Working Party. After priority was assigned, the elements were officially added to the periodic table, and the discoverers were invited to propose their names. By 2016, this had occurred for all elements up to 118, thereby completing the periodic table's first seven rows. The discoveries of elements beyond 106 were made possible by techniques devised by Yuri Oganessian at the JINR: cold fusion (bombardment of lead and bismuth by heavy ions) made possible the 1981–2004 discoveries of elements 107 through 112 at GSI and 113 at Riken, and he led the JINR team (in collaboration with American scientists) to discover elements 114 through 118 using hot fusion (bombardment of actinides by calcium ions) in 1998–2010.
In celebration of the periodic table's 150th anniversary, the United Nations declared the year 2019 as the International Year of the Periodic Table, celebrating "one of the most significant achievements in science". The discovery criteria set down by the TWG were updated in 2020 in response to experimental and theoretical progress that had not been foreseen in 1991. Today, the periodic table is among the most recognisable icons of chemistry. IUPAC is involved today with many processes relating to the periodic table: the recognition and naming of new elements, recommending group numbers and collective names, and the updating of atomic weights.
Future extension beyond the seventh period
The most recently named elements – nihonium (113), moscovium (115), tennessine (117), and oganesson (118) – completed the seventh row of the periodic table. Future elements would have to begin an eighth row. These elements may be referred to either by their atomic numbers (e.g. "element 164"), or by the IUPAC systematic element names adopted in 1978, which directly relate to the atomic numbers (e.g. "unhexquadium" for element 164, derived from Latin unus "one", Greek hexa "six", Latin quadra "four", and the traditional -ium suffix for metallic elements). All attempts to synthesize such elements have failed so far. An attempt to make element 119 has been ongoing since 2018 at the Riken research institute in Japan. The LBNL in the United States, the JINR in Russia, and the Heavy Ion Research Facility in Lanzhou (HIRFL) in China also plan to make their own attempts at synthesizing the first few period 8 elements.
If the eighth period followed the pattern set by the earlier periods, then it would contain fifty elements, filling the 8s, 5g, 6f, 7d, and finally 8p subshells in that order. But by this point, relativistic effects should result in significant deviations from the Madelung rule. Various different models have been suggested for the configurations of eighth-period elements, as well as how to show the results in a periodic table. All agree that the eighth period should begin like the previous ones with two 8s elements, 119 and 120. However, after that the massive energetic overlaps between the 5g, 6f, 7d, and 8p subshells mean that they all begin to fill together, and it is not clear how to separate out specific 5g and 6f series. Elements 121 through 156 thus do not fit well as chemical analogues of any previous group in the earlier parts of the table, although they have sometimes been placed as 5g, 6f, and other series to formally reflect their electron configurations. Eric Scerri has raised the question of whether an extended periodic table should take into account the failure of the Madelung rule in this region, or if such exceptions should be ignored. The shell structure may also be fairly formal at this point: already the electron distribution in an oganesson atom is expected to be rather uniform, with no discernible shell structure.
The situation from elements 157 to 172 should return to normalcy and be more reminiscent of the earlier rows. The heavy p-shells are split by the spin–orbit interaction: one p-orbital (p1/2) is more stabilized, and the other two (p3/2) are destabilized. (Such shifts in the quantum numbers happen for all types of shells, but it makes the biggest difference to the order for the p-shells.) It is likely that by element 157, the filled 8s and 8p1/2 shells with four electrons in total have sunk into the core. Beyond the core, the next orbitals are 7d and 9s at similar energies, followed by 9p1/2 and 8p3/2 at similar energies, and then a large gap. Thus, the 9s and 9p1/2 orbitals in essence replace the 8s and 8p1/2 ones, making elements 157–172 probably chemically analogous to groups 3–18: for example, element 164 would appear two places below lead in group 14 under the usual pattern, but is calculated to be very analogous to palladium in group 10 instead. Thus, it takes fifty-four elements rather than fifty to reach the next noble element after 118. However, while these conclusions about elements 157 through 172's chemistry are generally agreed by models, there is disagreement on whether the periodic table should be drawn to reflect chemical analogies, or if it should reflect likely formal electron configurations, which should be quite different from earlier periods and are not agreed between sources. Discussion about the format of the eighth row thus continues.
Beyond element 172, calculation is complicated by the 1s electron energy level becoming imaginary. Such a situation does have a physical interpretation and does not in itself pose an electronic limit to the periodic table, but the correct way to incorporate such states into multi-electron calculations is still an open question needing to be solved to calculate the periodic table's structure beyond this point.
Nuclear stability will likely prove a decisive factor constraining the number of possible elements. It depends on the balance between the electric repulsion between protons and the strong force binding protons and neutrons together. Protons and neutrons are arranged in shells, just like electrons, and so a closed shell can significantly increase stability: the known superheavy nuclei exist because of such a shell closure, probably at around 114–126 protons and 184 neutrons. They are probably close to a predicted island of stability, where superheavy nuclides should be more long-lived than expected: predictions for the longest-lived nuclides on the island range from microseconds to millions of years. It should nonetheless be noted that these are essentially extrapolations into an unknown part of the chart of nuclides, and systematic model uncertainties need to be taken into account.
As the closed shells are passed, the stabilizing effect should vanish. Thus, superheavy nuclides with more than 184 neutrons are expected to have much shorter lifetimes, spontaneously fissioning within 10^−15 seconds. If this is so, then it would not make sense to consider them chemical elements: IUPAC and IUPAP recommend that an element be considered to exist only if the nucleus lives longer than 10^−14 seconds, the time needed for it to gather an electron cloud. Nonetheless, theoretical estimates of half-lives are very model-dependent, ranging over many orders of magnitude. The extreme repulsion between protons is predicted to result in exotic nuclear topologies, with bubbles, rings, and tori expected: this further complicates extrapolation. It is not clear if any further-out shell closures exist, due to an expected smearing out of distinct nuclear shells (as is already expected for the electron shells at oganesson). Furthermore, even if later shell closures exist, it is not clear if they would allow such heavy elements to exist. As such, it may be that the periodic table practically ends around element 120, as elements become too short-lived to observe, and then too short-lived to have chemistry; the era of discovering new elements would thus be close to its end. If another proton shell closure beyond 126 does exist, then it probably occurs around 164; thus the region where periodicity fails more or less matches the region of instability between the shell closures.
Alternatively, quark matter may become stable at high mass numbers, in which the nucleus is composed of freely flowing up and down quarks instead of binding them into protons and neutrons; this would create a continent of stability instead of an island. Other effects may come into play: for example, in very heavy elements the 1s electrons are likely to spend a significant amount of time so close to the nucleus that they are actually inside it, which would make them vulnerable to electron capture.
Even if eighth-row elements can exist, producing them is likely to be difficult, and it should become even more difficult as atomic number rises. Although the 8s elements 119 and 120 are expected to be reachable with present means, the elements beyond that are expected to require new technology, if they can be produced at all. Experimentally characterizing these elements chemically would also pose a great challenge.
Alternative periodic tables
The periodic law may be represented in multiple ways, of which the standard periodic table is only one. Within 100 years of the appearance of Mendeleev's table in 1869, Edward G. Mazurs had collected an estimated 700 different published versions of the periodic table. Many forms retain the rectangular structure, including Charles Janet's left-step periodic table (pictured below), and the modernised form of Mendeleev's original 8-column layout that is still common in Russia. Other periodic table formats have been shaped much more exotically, such as spirals (Otto Theodor Benfey's pictured to the right), circles and triangles.
Alternative periodic tables are often developed to highlight or emphasize chemical or physical properties of the elements that are not as apparent in traditional periodic tables, with different ones skewed more towards emphasizing chemistry or physics at either end. The standard form, which remains by far the most common, is somewhere in the middle.
The many different forms of the periodic table have prompted the questions of whether there is an optimal or definitive form of the periodic table, and if so, what it might be. There are no current consensus answers to either question. Janet's left-step table is being increasingly discussed as a candidate for being the optimal or most fundamental form; Scerri has written in support of it, as it clarifies helium's nature as an s-block element, increases regularity by having all period lengths repeated, faithfully follows Madelung's rule by making each period correspond to one value of n + ℓ, and regularises atomic number triads and the first-row anomaly trend. While he notes that its placement of helium atop the alkaline earth metals can be seen as a disadvantage from a chemical perspective, he counters this by appealing to the first-row anomaly, pointing out that the periodic table "fundamentally reduces to quantum mechanics", and that it is concerned with "abstract elements" and hence atomic properties rather than macroscopic properties.
See also
Nucleosynthesis
Notes
References
Bibliography
Scerri, Eric R. (2020). The Periodic Table, Its Story and Its Significance (2nd ed.). Oxford University Press, New York.
Further reading
External links
Periodic Table featured topic page on Science History Institute Digital Collections featuring select visual representations of the periodic table of the elements, with an emphasis on alternative layouts including circular, cylindrical, pyramidal, spiral, and triangular forms.
IUPAC Periodic Table of the Elements
Dynamic periodic table, with interactive layouts
Eric Scerri, leading philosopher of science specializing in the history and philosophy of the periodic table
The Internet Database of Periodic Tables
Periodic table of endangered elements
Periodic table of samples
Periodic table of videos
WebElements
The Periodic Graphics of Elements
1869 works
Dmitri Mendeleev
Science education materials
Infographics
Tables (information) | Periodic table | [
"Physics",
"Chemistry"
] | 20,731 | [
"Periodic table",
"Chemical elements",
"Atoms",
"Matter"
] |
4,412,358 | https://en.wikipedia.org/wiki/Boojum%20%28superfluidity%29 | In the physics of superfluidity, a boojum is a geometric pattern on the surface of one of the phases of superfluid helium-3, whose motion can result in the decay of a supercurrent. A boojum can result from a monopole singularity in the bulk of the liquid being drawn to, and then "pinned" on a surface. Although superfluid helium-3 only exists within a few thousandths of a degree of absolute zero, boojums have also been observed forming in various liquid crystals, which exist at a far broader range of temperatures.
The boojum was named by N. David Mermin of Cornell University in 1976. He was inspired by Lewis Carroll's poem The Hunting of the Snark. As in the poem, the appearance of a boojum can cause something (in this case, the supercurrent) to "softly and suddenly vanish away". Other, less whimsical names had already been suggested for the phenomenon, but Mermin was persistent. After an exchange of letters that Mermin describes as both "lengthy and hilarious", the editors of Physical Review Letters agreed to his terminology. Research using the term "boojum" in a superfluid context was first published in 1977, and the term has since gained widespread acceptance in broader areas of physics. Its Russian phonetic equivalent is "budzhum", which is also well accepted by physicists.
The plural of the term is "boojums", a word initially disliked by Mermin (who at first used "booja") but one which is defined unambiguously by Carroll in his poem.
References
A collection of articles by David Mermin, including "E pluribus boojum".
External links
Transcript of Mermin's 1999 lecture, in which he describes how he made "boojum" an internationally accepted scientific term:
Fluid dynamics
Superfluidity | Boojum (superfluidity) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 418 | [
"Physical phenomena",
"Phase transitions",
"Chemical engineering",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Piping",
"Matter",
"Fluid dynamics"
] |
4,413,754 | https://en.wikipedia.org/wiki/Gravitational%20energy | Gravitational energy or gravitational potential energy is the potential energy a massive object has due to its position in a gravitational field. Mathematically, it is the minimum mechanical work that has to be done against the gravitational force to bring a mass from a chosen reference point (often an "infinite distance" from the mass generating the field) to some other point in the field, which is equal to the change in the kinetic energies of the objects as they fall towards each other. Gravitational potential energy increases when two objects are brought further apart and is converted to kinetic energy as they are allowed to fall towards each other.
Formulation
For two pairwise interacting point particles, the gravitational potential energy U is the work that an outside agent must do in order to quasi-statically bring the masses together (which is therefore exactly opposite to the work done by the gravitational field on the masses):
U = −W = −∫ F · dr
where dr is the displacement vector of the mass, F is the gravitational force acting on it and · denotes the scalar product.
Newtonian mechanics
In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, so that it is zero when the objects are infinitely far apart. The gravitational potential energy is the potential energy an object has because it is within a gravitational field.
The magnitude and direction of the gravitational force experienced by a point mass m, due to the presence of another point mass M at a distance r, is given by Newton's law of gravitation, F = GmM/r², directed towards M.
Taking the origin to be at the position of M, the total work done by the gravitational force in bringing the point mass m from infinity to a final distance R (for example, the radius of Earth) from the point mass M is found by integrating the force with respect to displacement:
W = GmM/R
Gravitational potential energy being the minimum (quasi-static) work that needs to be done against the gravitational force in this procedure,
U = −W = −GmM/R
Simplified version for Earth's surface
In the common situation where a much smaller mass m is moving near the surface of a much larger object with mass M, the gravitational field is nearly constant and so the expression for gravitational energy can be considerably simplified. The change in potential energy moving from the surface (a distance R from the center) to a height h above the surface is
ΔU = GMm (1/R − 1/(R + h))
If h/R is small, as it must be close to the surface where g is nearly constant, then this expression can be simplified using the binomial approximation
1/(R + h) ≈ (1/R)(1 − h/R)
to
ΔU ≈ GMm h/R²
As the gravitational field is g = GM/R², this reduces to
ΔU ≈ mgh
Taking U = 0 at the surface (instead of at infinity), the familiar expression for gravitational potential energy emerges:
U = mgh
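A quick numerical check of this approximation can be done in a few lines of Python. The sketch below is illustrative only: the constants are rounded values for Earth, and the test mass m = 1 kg is an arbitrary choice; the relative error of the mgh formula grows roughly as h/R.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
R = 6.371e6          # radius of the Earth, m
m = 1.0              # test mass, kg

def delta_U_exact(h):
    # Exact change in potential energy lifting m from the surface to height h.
    return G * M * m * (1.0 / R - 1.0 / (R + h))

def delta_U_approx(h):
    # Near-surface approximation Delta U ~ m*g*h with g = G*M/R^2.
    g = G * M / R**2
    return m * g * h

for h in (1.0, 1e3, 1e5, 1e6):
    exact, approx = delta_U_exact(h), delta_U_approx(h)
    print(f"h = {h:>9.0f} m   exact = {exact:.6e} J   mgh = {approx:.6e} J   "
          f"relative error = {abs(approx - exact) / exact:.2%}")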
General relativity
In general relativity gravitational energy is extremely complex, and there is no single agreed upon definition of the concept. It is sometimes modelled via the Landau–Lifshitz pseudotensor that allows retention for the energy–momentum conservation laws of classical mechanics. Addition of the matter stress–energy tensor to the Landau–Lifshitz pseudotensor results in a combined matter plus gravitational energy pseudotensor that has a vanishing 4-divergence in all frames—ensuring the conservation law. Some people object to this derivation on the grounds that pseudotensors are inappropriate in general relativity, but the divergence of the combined matter plus gravitational energy pseudotensor is a tensor.
See also
Gravitational binding energy
Gravitational potential
Gravitational potential energy storage
Positive energy theorem
References
Forms of energy
Gravity
Conservation laws
Tensors in general relativity
Potentials | Gravitational energy | [
"Physics",
"Engineering"
] | 659 | [
"Tensors",
"Equations of physics",
"Physical quantities",
"Conservation laws",
"Tensor physical quantities",
"Forms of energy",
"Energy (physics)",
"Tensors in general relativity",
"Symmetry",
"Physics theorems"
] |
4,414,307 | https://en.wikipedia.org/wiki/Overlap%20extension%20polymerase%20chain%20reaction | The overlap extension polymerase chain reaction (or OE-PCR) is a variant of PCR. It is also referred to as Splicing by overlap extension / Splicing by overhang extension (SOE) PCR. It is used to assemble multiple smaller double stranded DNA fragments into a larger DNA sequence. OE-PCR is widely used to insert mutations at specific points in a sequence or to assemble custom DNA sequence from smaller DNA fragments into a larger polynucleotide.
Splicing of DNA molecules
As in most PCR reactions, two primers—one for each end—are used per sequence. To splice two DNA molecules, special primers are used at the ends that are to be joined. For each molecule, the primer at the end to be joined is constructed such that it has a 5' overhang complementary to the end of the other molecule. Following annealing when replication occurs, the DNA is extended by a new sequence that is complementary to the molecule it is to be joined to. Once both DNA molecules are extended in such a manner, they are mixed and a PCR is carried out with only the primers for the far ends. The overlapping complementary sequences introduced will serve as primers and the two sequences will be fused. This method has an advantage over other gene splicing techniques in not requiring restriction sites.
To get higher yields, some primers are used in excess as in asymmetric PCR.
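The construction of the special joining primers can be sketched programmatically. In the following Python sketch the two fragment sequences are made up for illustration, and the tail and annealing lengths (20 nucleotides each) are arbitrary choices; the point is only to show how each inner primer carries a 5' overhang complementary to the end of the other fragment.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def overlap_primers(frag_a, frag_b, tail=20, anneal=20):
    # Inner primer pair that introduces the A/B overlap (both written 5'->3').
    # Reverse primer for fragment A: anneals to the 3' end of A and carries a
    # 5' tail that is the reverse complement of the start of B.
    rev_a = reverse_complement(frag_b[:tail]) + reverse_complement(frag_a[-anneal:])
    # Forward primer for fragment B: anneals to the start of B and carries a
    # 5' tail identical to the 3' end of A.
    fwd_b = frag_a[-tail:] + frag_b[:anneal]
    return rev_a, fwd_b

frag_a = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTTGTT"   # made-up sequence
frag_b = "GGTGATGTTAATGGGCACAAATTTTCTGTCAGTGGAGAGGGTGAAGGTGAT"   # made-up sequence
rev_a, fwd_b = overlap_primers(frag_a, frag_b)
print("reverse primer for A:", rev_a)
print("forward primer for B:", fwd_b)

Amplifying fragment A with its ordinary forward primer plus rev_a, and fragment B with fwd_b plus its ordinary reverse primer, yields two products whose shared overlap allows them to prime each other in the fusion PCR.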
Introduction of mutations
To insert a mutation into a DNA sequence, a specific primer is designed. The primer may contain a single substitution or contain a new sequence at its 5' end. If a deletion is required, a sequence that is 5' of the deletion is added, because the 3' end of the primer must have complementarity to the template strand so that the primer can sufficiently anneal to the template DNA.
Following annealing of the primer to the template, DNA replication proceeds to the end of the template. The duplex is denatured and the second primer anneals to the newly formed DNA strand, containing sequence from the first primer. Replication proceeds to produce a strand of the required sequence, containing the mutation.
The duplex is denatured again and the first primer can now bind to the latest DNA strand. The replication reaction continues to produce a fully dimerised DNA fragment. After further PCR cycles, to amplify the DNA, the sample can be separated by agarose gel electrophoresis, followed by electroelution for collection.
Efficiently generating oligonucleotides beyond ~110 nucleotides in length is very difficult, so to insert a mutation further into a sequence than a 110 nt primer will allow, it is necessary to employ overlap extension PCR. In OE-PCR the sequence being modified is used to make two modified strands with the mutation at opposite ends, using the technique described above. After mixing and denaturation, the strands are allowed to anneal to produce three different combinations as detailed in the diagram. Only the duplex without overlap at the 5' end will allow extension by DNA polymerase in 3' to 5' direction.
Following the extension of the OE-PCR reaction, the PCR mix or the eluted fragments of appropriate size are subject to normal PCR, using the outermost primers used in the initial, mutagenic PCR reactions.
In addition, the combination of OE-PCR and asymmetric PCR can be used to improve the efficiency of site-directed mutagenesis.
Applications in molecular cloning
Besides the introduction of mutations, Overlap Extension PCR is widely used to assemble complex DNA sequences without the introduction of undesired nucleotides at any position. This is possible since OE-PCR relies on the utilization of complementary overhangs to guide the scarless splicing of custom DNA fragments in a desired order. This is the main advantage of OE-PCR and other long-homology based cloning methods such as Gibson assembly, which overcome the limitations of traditional restriction enzyme digestion and ligation cloning methods.
Assembly of custom DNA sequences with OE-PCR consists of three main steps. First, individual DNA sequences are amplified by PCR from different templates and flanked with the required complementary overhangs. Second, the PCR products obtained in the first step are combined in the overlap extension PCR reaction, where the complementary overhangs bind pair-wise, allowing the polymerase to extend the DNA strand. Finally, outer primers targeting the external overhangs are used and the desired DNA product is amplified in the final PCR reaction.
Technical Considerations
The overall success of OE-PCR based DNA assemblies relies on several factors, the most relevant being the intrinsic features of the DNA sequence to be assembled, the sequence and length of the overlapping overhangs, the design of the outer primers for the final amplification, and the conditions of the PCR reaction. Normally, from 2 to 6 fragments can be spliced simultaneously in a single OE-PCR reaction. Overhangs should be at least 40 nucleotides long to ensure adequate interaction between fragments. Final amplification primers are commonly designed following general guidelines for PCR; however, they are used at a 2 to 5 times lower concentration than in standard PCR reactions, as this has been shown to reduce undesired amplifications. Additionally, the utilization of proofreading DNA polymerases is highly recommended.
References
Molecular biology
Laboratory techniques
Polymerase chain reaction
Genetic engineering
Molecular biology techniques | Overlap extension polymerase chain reaction | [
"Chemistry",
"Engineering",
"Biology"
] | 1,154 | [
"Biochemistry methods",
"Genetics techniques",
"Biological engineering",
"Polymerase chain reaction",
"Genetic engineering",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry"
] |
4,415,615 | https://en.wikipedia.org/wiki/DNA%20origami | DNA origami is the nanoscale folding of DNA to create arbitrary two- and three-dimensional shapes at the nanoscale. The specificity of the interactions between complementary base pairs make DNA a useful construction material, through design of its base sequences. DNA is a well-understood material that is suitable for creating scaffolds that hold other molecules in place or to create structures all on its own.
DNA origami was the cover story of Nature on March 16, 2006. Since then, DNA origami has progressed past an art form and has found a number of applications from drug delivery systems to uses as circuitry in plasmonic devices; however, most commercial applications remain in a concept or testing phase.
Overview
The idea of using DNA as a construction material was first introduced in the early 1980s by Nadrian Seeman. The method of DNA origami was developed by Paul Rothemund at the California Institute of Technology. In contrast to common top-down fabrication methods such as 3D printing or lithography which involve depositing or removing material through a tool, DNA Nanotechnology, as well as DNA Origami as a subset, is a bottom-up fabrication method. By rationally designing the constituent subunits of the DNA polymer, DNA can self-assemble into a variety of shapes. The process of constructing DNA Origami involves the folding of a long single strand of viral DNA (typically the 7,249 bp genomic DNA of M13 bacteriophage) aided by multiple smaller "staple" strands. These shorter strands bind the longer in various places, resulting in the formation of a pre-defined two- or three-dimensional shape. Examples include a smiley face and a coarse map of China and the Americas, along with many three-dimensional structures such as cubes.
There are several DNA properties that make the molecule an ideal building material for DNA origami. DNA strands have a natural tendency to bind to their complementary sequences through Watson–Crick base pairing. This allows staple strands to locate the position on the scaffold strand without any external manipulation, leading to self-assembly of the desired structure.
The specific sequence of bases in DNA gives the material an element of programmability by determining its binding behavior. Carefully designing the sequences of the staple strands enables scientists to precisely direct the scaffold strand's folding into a predetermined shape with high precision.
On a chemical level, the hydrogen bonds that exist between the complementary base pairs provide strength and stability to the folded DNA origami structures. Additionally, DNA is a relatively stable molecule, offering resilience in physiological conditions.
One of the advantages of using a DNA Origami nanostructure over an otherwise classified DNA nanostructure is the ease of defining finite structures. In the design of some other DNA nanostructures, it can be impractical to design the extremely large number of individualized strands if the entire structure is composed of smaller strands. One method of bypassing the need for a huge number of different strands is to use repeating units, which comes with the disadvantage of a distribution of sizes and sometimes shapes. DNA Origami, however, forms discrete structures.
Applications for DNA Origami are primarily focused around the ability to exert fine control on systems, especially by constraining positions of molecules, typically by attachment to the DNA Origami nanostructures. Current applications are primarily focused around sensing and drug delivery, but many additional applications have been investigated.
Fabrication
Fabrication of DNA origami objects requires a preliminary intuition of 3-dimensional DNA structural design. This can be difficult to grasp due to the complexity of exclusively using adenine-thymine pairings and guanine-cytosine pairings to both fold and unravel double helical DNA molecules such that the output strands produce uniquely desired shapes.
The design software and the choice of base-pair sequences become crucial for creating intricate 2D or even 3D shapes as the key to DNA origami lies in the precise base-pairing between the technique's two building blocks: staple strands and the scaffold. This ensures specific binding and accurate folding. A scaffold strand is a long, single-stranded DNA molecule, often sourced from a virus. Staple strands are shorter DNA strands designed to bind to specific sequences on the scaffold strand, dictating its folding.
To produce a desired shape, images are drawn with a raster fill of a single long DNA molecule. This design is then fed into a computer program that calculates the placement of individual staple strands. Each staple binds to a specific region of the DNA template, and thus due to Watson–Crick base pairing, the necessary sequences of all staple strands are known and displayed. The DNA is mixed, then heated and cooled. As the DNA cools, the various staples pull the long strand into the desired shape. Designs are directly observable via several methods, including electron microscopy, atomic force microscopy, or fluorescence microscopy when DNA is coupled to fluorescent materials.
Bottom-up self-assembly methods are considered promising alternatives that offer cheap, parallel synthesis of nanostructures under relatively mild conditions.
Since the creation of this method, CAD software has been developed to assist the process. This allows researchers to use a computer to determine the correct staples needed to form a certain shape. One such program, caDNAno, is open-source software for creating such structures from DNA. The use of software has not only increased the ease of the process but has also drastically reduced the errors made by manual calculations.
After meticulously planning the sequence of the staple strands with software to ensure they bind the scaffold strand at the intended points, the designed staple strand sequences are synthesized in a lab using techniques like automated DNA synthesis. Finally, the scaffold strand and staple strands are mixed in a buffer solution and subjected to a specific temperature cycle. This cycle allows the staple strands to find their complementary sequences on the scaffold strand and bind through hydrogen bonding, causing the scaffold to fold into the desired shape.
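As a toy illustration of the folding principle (a deliberate oversimplification, not how caDNAno works; the scaffold below is a random made-up sequence rather than M13): a staple that pins two distant scaffold regions next to each other is simply the concatenation of the reverse complements of those regions, and the order of the two halves only determines which region the staple's 5' end binds.

import random

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def staple_for(scaffold, region1, region2):
    # Staple (5'->3') that base-pairs with scaffold[region1] and scaffold[region2],
    # holding the two regions side by side once it hybridizes.
    a1, b1 = region1
    a2, b2 = region2
    return reverse_complement(scaffold[a2:b2]) + reverse_complement(scaffold[a1:b1])

random.seed(0)
scaffold = "".join(random.choice("ACGT") for _ in range(200))  # toy scaffold
staple = staple_for(scaffold, (10, 26), (150, 166))
print(staple)  # a 32-nt staple spanning two distant scaffold regions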
Dynamic Structures and Modifications
As in the broader field of DNA nanotechnology, DNA Origami may be made dynamic in nature through the use of a variety of methods. The three primary methods of creating a dynamic DNA Origami machine are toehold mediated strand displacement, enzymatic reactions, and base stacking. While these methods are most commonly used, additional methods for creating dynamic DNA Origami machines exist, such as designing a directional component and using brownian motion to drive rotational movement of structures or leveraging less commonly used DNA self-assembly phenomena like G-quadruplexes or i-motifs which can be pH sensitive.
Modifications can be otherwise used to affect structural properties, to impart unique chemistry to the nanostructures, or to add stimuli responses to the nanostructures. Modifications to structures can be made through conjugation of molecules such as proteins, or through chemical modification of the DNA bases themselves. pH dependent responses, light dependent responses, and more have been shown through modified systems.
One example application of dynamic structures is a stimulus-triggered response resulting in drug release, which has been presented by several groups. Other, less common applications include in vivo sensing of molecular motion, such as the unwinding activity of helicase.
Biomedical Applications
DNA Origami, being made of a natural biological polymer, is well suited to the biological environment when salt concentrations allow, and offers fine control over the positioning of molecules and structures in the system. This allows DNA Origami to be applicable to a number of scenarios in biomedical engineering. Current biomedical applications include drug release with 0 order mechanisms, vaccines, cell signaling, and sensing applications.
DNA can be folded into an octahedron and coated with a single bilayer of phospholipid, mimicking the envelope of a virus particle. The DNA nanoparticles, each about the size of a virion, are able to remain in circulation for hours after being injected into mice. They also elicit a much lower immune response than the uncoated particles. This points to a potential use in drug delivery, as reported by researchers at the Wyss Institute at Harvard University.
Researchers at the Harvard University Wyss Institute also reported self-assembling and self-destructing drug delivery vessels built from DNA origami in laboratory tests. The DNA nanorobot they created is an open DNA tube with a hinge on one side which can be clasped shut. The drug-filled DNA tube is held shut by a DNA aptamer, configured to identify and seek a certain disease-related protein. Once the origami nanobots get to the infected cells, the aptamers break apart and release the drug. The first disease models the researchers used were leukemia and lymphoma.
Researchers at the National Center for Nanoscience and Technology in Beijing and Arizona State University reported a DNA origami delivery vehicle for doxorubicin, a well-known anti-cancer drug. The drug was non-covalently attached to DNA origami nanostructures through intercalation and a high drug load was achieved. The DNA–doxorubicin complex was taken up by human breast adenocarcinoma cancer cells (MCF-7) via cellular internalization with much higher efficiency than doxorubicin in free form. The enhancement of cell-killing activity was observed not only in regular MCF-7 cells but, more importantly, also in doxorubicin-resistant cells. The scientists theorized that the doxorubicin-loaded DNA origami inhibits lysosomal acidification, resulting in cellular redistribution of the drug to action sites, thus increasing the cytotoxicity against the tumor cells. Further in vivo testing on mice suggests that over a 12-day period, doxorubicin was more effective at reducing tumor sizes when it was contained in DNA origami nanostructures (DONs).
Researchers from the Massachusetts Institute of Technology are developing a method to attach various viral antigens to virus-shaped DNA particles that mimic the virus, for use in developing new vaccines. This work started in 2016 when Bathe's lab created an algorithm known as DAEDALUS (DNA Origami Sequence Design Algorithm for User-defined Structures) to generate precision-controlled three-dimensional shapes of DNA. Using the tool, they designed virus-shaped scaffolding that can modularly attach different antigens to the surface of the DNA scaffold. Currently, MIT is working to develop optimal geometries for B cells to recognize HIV antigens. Further research has attempted to replace the HIV antigens with SARS-CoV-2 antigens and is testing whether the vaccines elicit a proper immune response from isolated B cells and in mice.
Similarly, researchers from the Technical University of Munich have developed a method to have T-cells target tumor cells by using antigen-coated DNA origami. The researchers developed a method to create chassis known as programmable T-cell engagers (PTEs), which are DNA origami structures that can be configured to bind to user-defined target cells and T-cells depending on which antigens coat the surface of the nanostructure. The in vitro results show that after 24 hours of exposure 90% of the tumor cells were destroyed. Meanwhile, in vivo testing showed that their PTEs were capable of binding to the target proteins for several hours, which validates the mechanism they designed.
Nanotechnology Applications
Many potential applications have been suggested in literature, including enzyme immobilization, drug delivery systems, and nanotechnological self-assembly of materials. Though DNA is not the natural choice for building active structures for nanorobotic applications, due to its lack of structural and catalytic versatility, several papers have examined the possibility of molecular walkers on origami and switches for algorithmic computing. The following paragraphs list some of the reported applications conducted in the laboratories with clinical potential.
In a study conducted by a group of scientists from iNANO center and CDNA Center at Aarhus university, researchers were able to construct a small multi-switchable 3D DNA Box Origami. The proposed nanoparticle was characterized by AFM, TEM and FRET. The constructed box was shown to have a unique reclosing mechanism, which enabled it to repeatedly open and close in response to a unique set of DNA or RNA keys. The authors proposed that this "DNA device can potentially be used for a broad range of applications such as controlling the function of single molecules, controlled drug delivery, and molecular computing."
Nanorobots made of DNA origami that demonstrated computing capacities and completed pre-programmed tasks inside a living organism were reported by a team of bioengineers at the Wyss Institute at Harvard University and the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University. As a proof of concept, the team injected various kinds of nanobots (curled DNA encasing molecules with fluorescent markers) into live cockroaches. By tracking the markers inside the cockroaches, the team assessed the accuracy with which the molecules (released by the uncurled DNA) were delivered to target cells, and showed that the interactions among the nanobots provide control equivalent to a computer system. The complexity of the logic operations, the decisions and actions, increases with the number of nanobots. The team estimated that the computing power in the cockroach could be scaled up to that of an 8-bit computer.
A research group at the Indian Institute of Science used nanostructures to develop a platform to elucidate the coaxial stacking between DNA bases. This approach utilized DNA-PAINT based super-resolution microscopy for visualizing these DNA nanostructures and performed DNA binding kinetics analysis to elucidate the fundamental force of base-stacking that helps stabilize the DNA double helical structure. They went on to assemble multimeric DNA origami nanostructures termed as a 'three-point star' into a tetrahedral 3D origami structure. The assembly relied chiefly on base-stacking interactions between each subunit. The group further showed that the knowledge of such interactions can be used to predict and thus tune the relative stabilities of these multimeric DNA nanostructures.
Similar approaches
The idea of using protein design to accomplish the same goals as DNA origami has surfaced as well. Researchers at the National Institute of Chemistry in Slovenia are working on using rational design of protein folding to create structures much like those seen with DNA origami. The main focus of current research in protein folding design is in the drug delivery field, using antibodies attached to proteins as a way to create a targeted vehicle.
See also
RNA origami
DNA nanotechnology
Molecular self-assembly
Folding@home
Origami
References
Further reading
DNA nanotechnology
Genetics techniques | DNA origami | [
"Materials_science",
"Engineering",
"Biology"
] | 3,060 | [
"Genetics techniques",
"Nanotechnology",
"DNA nanotechnology",
"Genetic engineering"
] |
4,417,855 | https://en.wikipedia.org/wiki/Illusory%20contours | Illusory contours or subjective contours are visual illusions that evoke the perception of an edge without a luminance or color change across that edge. Illusory brightness and depth ordering often accompany illusory contours. Friedrich Schumann is often credited with the discovery of illusory contours around the beginning of the 20th century, but they are present in art dating to the Middle Ages. Gaetano Kanizsa’s 1976 Scientific American paper marked the resurgence of interest in illusory contours for vision scientists.
Common types of illusory contours
Kanizsa figures
Perhaps the most famous example of an illusory contour is the triangle configuration popularized by Gaetano Kanizsa.
Kanizsa figures trigger the percept of an illusory contour by aligning circles with wedge-shaped portions removed in the visual field such that the edges form a shape. Although not explicitly part of the image, Kanizsa figures evoke the perception of a shape, defined by a sharp illusory contour.
Typically, the shape seems brighter than the background, even though the luminance is in reality homogeneous. Additionally, the illusory shape seems to be closer to the viewer than the inducers. Kanizsa figures involve modal completion of the illusory shape and amodal completion of the inducers.
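A Kanizsa-style figure is easy to generate: draw filled discs at the corners of the intended illusory shape and then overlay the shape itself in the background colour with no outline, which carves the wedge-shaped "mouths" out of the discs. The following Python sketch uses matplotlib; the disc radius and triangle size are arbitrary choices for illustration.

import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Polygon

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]   # corners of the illusory triangle

fig, ax = plt.subplots(figsize=(4, 4))
for v in vertices:
    ax.add_patch(Circle(v, radius=0.25, facecolor="black", edgecolor="none"))
# A white, edge-less triangle cuts a wedge out of each disc, leaving three "pac-men".
ax.add_patch(Polygon(vertices, closed=True, facecolor="white", edgecolor="none"))

ax.set_xlim(-0.4, 1.4)
ax.set_ylim(-0.4, 1.3)
ax.set_aspect("equal")
ax.axis("off")
plt.show()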
Ehrenstein illusion
Closely related to Kanizsa figures is the Ehrenstein illusion. Instead of employing circles with missing wedges, the Ehrenstein illusion triggers an illusory contour percept via radial line segments. Ehrenstein's discovery was originally contextualized as a modification of the Hermann grid.
Abutting line gratings
Illusory contours are created at the boundary between two misaligned gratings. In these so-called abutting line gratings, the illusory contour is perpendicular to the inducing elements.
In art and graphic design
Olympic logos from 1972, 1984, 1988, and 1994 all feature illusory contours, as does Ellsworth Kelly's 1950s series.
Jacob Gestman Geradts often used the Kanizsa illusion in his silkscreen prints, for instance in his work Formula 1 (1991).
Cortical responses
It is thought that early visual cortical regions such as V1 and V2 in the visual system are responsible for forming illusory contours. Studies using human neuroimaging techniques have found that illusory contours are associated with activity in the deep layers of primary visual cortex.
Related visual phenomena
Visual illusions are useful stimuli for studying the neural basis of perception because they hijack the visual system's innate mechanisms for interpreting the visual world under normal conditions. For example, objects in the natural world are often only partially visible. Illusory contours provide clues for how the visual system constructs surfaces when portions of the surface's edge are not visible.
The encoding of surfaces is thought to be an indispensable part of visual perception, forming a critical intermediate stage of visual processing between the initial analysis of visual features and the ability to recognize complex stimuli like faces and scenes.
Amodal perception
Autostereogram
Filling-in
Gestalt psychology: 'closure'
Gestalt 'reification'
Negative space
Phantom contour
References
Further reading
External links
Illusory contours figures Many unpublished drawings (fr)
Optical illusions
Triangles | Illusory contours | [
"Physics"
] | 706 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
18,986,538 | https://en.wikipedia.org/wiki/Ehrenfest%20model | The Ehrenfest model (or dog–flea model) of diffusion was proposed by Tatiana and Paul Ehrenfest to explain the second law of thermodynamics. The model considers N particles in two containers. Particles independently change container at a rate λ. If X(t) = i is defined to be the number of particles in one container at time t, then it is a birth–death process with transition rates
i → i − 1 at rate iλ, for i = 1, 2, ..., N
i → i + 1 at rate (N − i)λ, for i = 0, 1, ..., N − 1
and equilibrium (binomial) distribution π_i = C(N, i)/2^N, where C(N, i) is the binomial coefficient.
Mark Kac proved in 1947 that if the initial system state is not equilibrium, then the entropy, given by
S(t) = −Σ_i P(X(t) = i) ln [ P(X(t) = i) / π_i ],
is monotonically increasing (H-theorem). This is a consequence of the convergence to the equilibrium distribution.
Interpretation of results
Consider that at the beginning all the particles are in one of the containers. It is expected that over time the number of particles in this container will approach N/2 and stabilize near that state (the containers will then hold approximately the same number of particles). However, from a mathematical point of view, going back to the initial state is possible (even almost sure). From the mean recurrence theorem it follows that even the expected time of going back to the initial state is finite; for a state with i particles in the container it equals 2^N / C(N, i) steps. Using Stirling's approximation one finds that if we start at equilibrium (equal number of particles in the containers), the expected time to return to equilibrium is asymptotically equal to √(πN/2) steps, whereas the expected time to return to the state with all particles in one container is 2^N steps. If we assume that particles change containers at a rate of one per second, then for even a moderately large number of particles (say N = 100) the return to equilibrium is expected within seconds, while the return to the configuration with all particles in one container and none in the other is expected to take on the order of 10^22 years. This means that, while theoretically certain, recurrence to the initial highly disproportionate state is in practice never observed.
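The relaxation towards equal occupation is easy to reproduce numerically. The following Python sketch simulates the discrete-time version of the model (one uniformly chosen particle changes container per step); N = 100, the number of steps, and the random seed are arbitrary illustrative choices.

import random
from collections import Counter

def simulate(N=100, steps=20000, seed=1):
    random.seed(seed)
    x = N                     # all particles start in container 1
    visits = Counter()
    trajectory = []
    for _ in range(steps):
        # A uniformly chosen particle changes container:
        # with probability x/N it leaves container 1, otherwise one enters.
        if random.random() < x / N:
            x -= 1
        else:
            x += 1
        trajectory.append(x)
        visits[x] += 1
    return trajectory, visits

traj, visits = simulate()
print("final count:", traj[-1])
print("time-averaged count:", sum(traj) / len(traj))   # close to N/2 = 50
print("most visited states:", visits.most_common(5))   # concentrated near 50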
Bibliography
Paul and Tatjana Ehrenfest: Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem. Physikalische Zeitschrift, vol. 8 (1907), pp. 311–314.
F.P. Kelly: The Ehrenfest model, in Reversibility and Stochastic Networks. Wiley, Chichester, 1979. pp. 17–20.
David O. Siegmund: Ehrenfest model of diffusion (mathematics). Encyclopædia Britannica.
See also
Kac ring
Ornstein–Uhlenbeck process
References
Queueing theory
Diffusion
Stochastic models | Ehrenfest model | [
"Physics",
"Chemistry"
] | 522 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
12,308,932 | https://en.wikipedia.org/wiki/Quantum%20stirring%2C%20ratchets%2C%20and%20pumping | A pump is an alternating current-driven device that generates a direct current (DC). In the simplest configuration a pump has two leads connected to two reservoirs. In such open geometry, the pump takes particles from one reservoir and emits them into the other. Accordingly, a current is produced even if the reservoirs have the same temperature and chemical potential.
Stirring is the operation of inducing a circulating current with a non-vanishing DC component in a closed system. The simplest geometry is obtained by integrating a pump in a closed circuit. More generally one can consider any type of stirring mechanism such as moving a spoon in a cup of coffee.
Main observations
Pumping and stirring effects in quantum physics have counterparts in purely classical stochastic and dissipative processes. The studies of quantum pumping and of quantum stirring emphasize the role of quantum interference in the analysis of the induced current. A major objective is to calculate the amount Q of transported particles per driving cycle. There are circumstances in which Q is an integer number due to the topology of parameter space. More generally Q is affected by inter-particle interactions, disorder, chaos, noise and dissipation.
Electric stirring explicitly breaks time-reversal symmetry. This property can be used to induce spin polarization in conventional semiconductors by purely electric means. Strictly speaking, stirring is a non-linear effect, because in linear response theory (LRT) an AC driving induces an AC current with the same frequency. Still an adaptation of the LRT Kubo formalism allows the analysis of stirring. The quantum pumping problem (where we have an open geometry) can be regarded as a special limit of the quantum stirring problem (where we have a closed geometry). Optionally the latter can be analyzed within the framework of scattering theory. Pumping and Stirring devices are close relatives of ratchet systems. The latter are defined in this context as AC driven spatially periodic arrays, where DC current is induced.
It is possible to induce a DC current by applying a bias, or if the particles are charged then by applying an electro-motive-force. In contrast to that a quantum pumping mechanism produces a DC current in response to a cyclic deformation of the confining potential. In order to have a DC current from an AC driving, time reversal symmetry (TRS) should be broken. In the absence of magnetic field and dissipation it is the driving itself that can break TRS. Accordingly, an adiabatic pump operation is based on varying more than one parameter, while for non-adiabatic pumps
modulation of a single parameter may suffice for DC current generation. The best known example is the peristaltic mechanism that combines a cyclic squeezing operation with on/off switching of entrance/exit valves.
Adiabatic quantum pumping is closely related to a class of current-driven nanomotors named adiabatic quantum motors. While in a quantum pump the periodic movement of some classical parameters pumps quantum particles from one reservoir to another, in a quantum motor a DC current of quantum particles induces the cyclic motion of the classical device. This relation is due to the Onsager reciprocal relations between the electric currents I_α and the current-induced forces F_ν, taken as generalized fluxes on one hand, and the chemical potential biases δμ_α and the velocities of the control parameters dX_ν/dt, taken as generalized forces on the other hand. Here ν and α are indices over the mechanical degrees of freedom and the leads respectively, and the response coefficients are evaluated at equilibrium, i.e. at δμ_α = 0 and dX_ν/dt = 0. Integrating the Onsager relation for a system with two leads yields the well-known relation between the pumped charge per cycle Q, the work done by the motor per cycle W, and the voltage bias V (in the ideal, dissipationless limit, W = QV).
The Kubo approach to quantum stirring
Consider a closed system which is described by a Hamiltonian H(X) that depends on some control parameters X = (X1, X2, X3). If X3 = Φ is an Aharonov–Bohm magnetic flux through the ring, then by Faraday's law −dΦ/dt is the electromotive force. If linear response theory applies we have the proportionality I = −G (dΦ/dt), where G is called the Ohmic conductance. In complete analogy, if we change X1 the current is I = −G1 (dX1/dt), and if we change X2 the current is I = −G2 (dX2/dt), where G1 and G2 are elements of a conductance matrix. Accordingly, for a full pumping cycle:
Q = ∮ I dt = −∮ (G1 dX1 + G2 dX2).
The conductance can be calculated and analyzed using the Kubo formula approach to quantum pumping, which is based on the theory of adiabatic processes. Here we write the expression that applies in the case of low frequency "quasi static" driving process (the popular terms "DC driving" and "adiabatic driving" turn out to be misleading so we do not use them):
where I is the current operator, and F = −∂H/∂X is the generalized force that is associated with the control parameter X. Though this formula is written using quantum mechanical notations it holds also classically if the commutator is replaced by Poisson brackets. In general G can be written as a sum of two terms: one has to do with dissipation, while the other, denoted B, has to do with geometry. The dissipative part vanishes in the strict quantum adiabatic limit, while the geometrical part might be non-zero. It turns out that in the strict adiabatic limit B is the "Berry curvature" (mathematically known as a "two-form"). Using the notation B for this field and n̂ for the normal vector (defined as illustrated), we can rewrite the formula for the amount of pumped particles as a line integral of B · n̂ along the pumping cycle. The advantage of this point of view is in the intuition that it gives for the result: Q is related to the flux of a field which is created (so to say) by "magnetic charges" in X space. In practice the calculation of B is done using a sum-over-states formula that involves the matrix elements of ∂H/∂X between the eigenstates of the system, divided by the squared energy differences (E_m − E_n)².
This sum-over-states expression can be regarded as the quantum adiabatic limit of the Kubo formula. The eigenstates of the system are labeled by the index n. These are in general many-body states, and the energies E_n are in general many-body energies. At finite temperatures a thermal average over n is implicit. The field B can be regarded as the rotor of a "vector potential" A (mathematically known as the "one-form"), namely B = ∇ × A. The "Berry phase" which is acquired by a wavefunction at the end of a closed cycle is the circulation ∮ A · dX.
Accordingly, one can argue that the "magnetic charge" that generates (so to say) the field B consists of quantized "Dirac monopoles". It follows from gauge invariance that the degeneracies of the system are arranged as vertical Dirac chains. The "Dirac monopoles" are situated at points where the level E_n has a degeneracy with another level. The Dirac monopoles picture is useful for charge transport analysis: the amount of transported charge is determined by the number of the Dirac chains encircled by the pumping cycle. Optionally it is possible to evaluate the transported charge per pumping cycle from the Berry phase by differentiating it with respect to the Aharonov–Bohm flux through the device.
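The quantization of the "magnetic charge" can be checked numerically in the simplest setting. The Python sketch below assumes, purely for illustration, a two-level Hamiltonian H(X) = X · σ (a generic level crossing looks like this near the degeneracy); it computes the Berry flux of the lower level through a sphere enclosing the degeneracy at X = 0 using gauge-invariant plaquette phases, and the total comes out as 2π times an integer, i.e. one Dirac monopole.

import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi, r=1.0):
    # Lower eigenvector of H = r * (sin t cos p, sin t sin p, cos t) . sigma.
    h = r * (np.sin(theta) * np.cos(phi) * SX
             + np.sin(theta) * np.sin(phi) * SY
             + np.cos(theta) * SZ)
    vals, vecs = np.linalg.eigh(h)
    return vecs[:, 0]                      # eigh sorts eigenvalues in ascending order

def berry_flux(n_theta=60, n_phi=60):
    # Total Berry flux through the sphere via gauge-invariant plaquette phases.
    thetas = np.linspace(1e-3, np.pi - 1e-3, n_theta)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    states = [[ground_state(t, p) for p in phis] for t in thetas]
    flux = 0.0
    for i in range(n_theta - 1):
        for j in range(n_phi):
            jp = (j + 1) % n_phi
            u1 = np.vdot(states[i][j],      states[i][jp])
            u2 = np.vdot(states[i][jp],     states[i + 1][jp])
            u3 = np.vdot(states[i + 1][jp], states[i + 1][j])
            u4 = np.vdot(states[i + 1][j],  states[i][j])
            flux += np.angle(u1 * u2 * u3 * u4)   # Berry phase around one plaquette
    return flux

print("flux / (2*pi) =", berry_flux() / (2 * np.pi))   # close to an integer (+-1)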
The scattering approach to quantum pumping
The Ohmic conductance of a mesoscopic device that is connected by leads to reservoirs is given by the Landauer formula: in dimensionless units the Ohmic conductance of an open channel equals its transmission. The extension of this scattering point of view in the context of quantum pumping leads to the Brouwer–Büttiker–Prêtre–Thomas (BPT) formula, which relates the geometric conductance to the scattering matrix S of the pump. In the low temperature limit it yields
G = (e/2π) Im Tr[ P (∂S/∂X) S† ]
Here P is a projector that restricts the trace operation to the open channels of the lead where the current is measured. This BPT formula was originally derived using a scattering approach, but later its relation to the Kubo formula was worked out.
The effect of interactions
A very recent work considers the role of interactions in the stirring of Bose condensed particles. Otherwise the rest of the literature concerns primarily electronic devices. Typically the pump is modeled as a quantum dot. The effect of electron–electron interactions within the dot region is taken into account in the Coulomb blockade regime or in the Kondo regime. In the former case charge transport is quantized even in the case of small backscattering. Deviation from the exact quantized value is related to dissipation. In the Kondo regime, as the temperature is lowered, the pumping effect is modified. There are also works that consider interactions over the whole system (including the leads) using the Luttinger liquid model.
Quantum pumping in deformable mesoscopic systems
A quantum pump, when coupled to classical mechanical degrees of freedom, may also induce cyclic variations of the mechanical degrees of freedom coupled to it. In such a configuration, the pump works similarly to an Adiabatic quantum motor. A paradigmatic example of this class of systems is a quantum pump coupled to an elastically deformable quantum dot. The mentioned paradigm has been generalized to include non-linear effects and stochastic fluctuations.
See also
Quantum mechanics
Brownian ratchet
Adiabatic quantum motor
References
Unsorted
B. L. Hazelzet, M. R. Wegewijs, T. H. Stoof, and Yu. V. Nazarov, Phys. Rev. B 63 (2001) 165313
O. Entin-Wohlman, A. Aharony and V. Kashcheyevs, Turk. J. Phys. 27 (2003) 371
J. N. H. J. Cremers and P. W. Brouwer Phys. Rev. B 65 (2002) 115333
I. L. Aleiner, B. L. Altshuler and A. Kamenev, Phys. Rev. B 62 (2000) 10373
E. R. Mucciolo, C. Chamon and C. M. Marcus Phys. Rev. Lett. 89 (2002) 146802
T. Aono Phys. Rev. B 67 (2003) 155303
O. Entin-Wohlman, Y. Levinson, and P. Wölfle Phys. Rev. B 64 (2001) 195308
F. Hekking and Yu. Nazarov, Phys. Rev. B 44 (1991) 9110
F. Zhou, B. Spivak and B. Altshuler, Phys. Rev. Lett. 82 (1999) 608
Y. Wei, J. Wang, and H. Guo, Phys. Rev. B 62 (2000) 9947
Y. Wei, J. Wang, H. Guo, and C. Roland, Phys. Rev. B 64 (2001) 115321
Q. Niu, Phys. Rev. B 34 (1986) 5093
J. A. Chiang and Q. Niu, Phys. Rev. A 57 (1998) 2278
F. Hekking and Yu. Nazarov, Phys. Rev. B 44 (1991) 11506
M. G. Vavilov, V. Ambegaokar and I. Aleiner, Phys. Rev. B 63 (2001) 195313
V. Kashcheyevs, A. Aharony, and O. Entin-Wohlman, Eur. Phys. J. B 39 (2004) 385
V. Kashcheyevs, A. Aharony, and O. Entin-Wohlman Phys. Rev. B 69 (2004) 195301
O. Entin-Wohlman, A. Aharony, and V. Kashcheyevs J. of the Physical Society of Japan 72, Supp. A (2003) 77
O. Entin-Wohlman and A. Aharony Phys. Rev. B 66 (2002) 035329
O. Entin-Wohlman, A. Aharony, and Y. Levinson Phys. Rev. B 65 (2002) 195411
Y. Levinson, O. Entin-Wohlman, and P. Wölfle Physica A 302 (2001) 335
L. E. F. Foa Torres Phys. Rev. B 72 (2005) 245339
Quantum mechanics | Quantum stirring, ratchets, and pumping | [
"Physics"
] | 2,483 | [
"Theoretical physics",
"Quantum mechanics"
] |
12,309,649 | https://en.wikipedia.org/wiki/Random%20close%20pack | Random close packing (RCP) of spheres is an empirical parameter used to characterize the maximum volume fraction of solid objects obtained when they are packed randomly. For example, when a solid container is filled with grain, shaking the container will reduce the volume taken up by the objects, thus allowing more grain to be added to the container. In other words, shaking increases the density of packed objects. But shaking cannot increase the density indefinitely: a limit is reached, and if this limit is reached without obvious packing into an ordered structure, such as a regular crystal lattice, this is the empirical random close-packed density for this particular packing procedure. Random close packing is the highest volume fraction that can be obtained by such random packing procedures.
Experiments and computer simulations have shown that the most compact way to pack hard perfect same-size spheres randomly gives a maximum volume fraction of about 64%, i.e., approximately 64% of the volume of a container is occupied by the spheres. The problem of theoretically predicting the random close packing of spheres is difficult mainly because of the absence of a unique definition of randomness or disorder. The random close packing value is significantly below the maximum possible close-packing of same-size hard spheres into a regular crystalline arrangement, which is 74.04%. Both the face-centred cubic (fcc) and hexagonal close packed (hcp) crystal lattices have maximum densities equal to this upper limit, which can occur through the process of granular crystallisation.
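A small sketch comparing the numbers quoted above; the 1-litre container and 1 cm sphere diameter are arbitrary example values.

```python
import math

# Close-packed (fcc/hcp) volume fraction: pi / (3 * sqrt(2)) ~ 0.7405
phi_fcc = math.pi / (3 * math.sqrt(2))
phi_rcp = 0.64          # empirical random-close-packing value for spheres

print(f"fcc/hcp packing fraction: {phi_fcc:.4f}")
print(f"empirical RCP fraction  : {phi_rcp:.2f}")

# Example: how many 1 cm diameter spheres fit in a 1 litre container?
d = 0.01                                  # sphere diameter in metres
v_sphere = math.pi * d**3 / 6             # volume of one sphere
v_container = 1e-3                        # 1 litre in cubic metres
print("randomly packed :", int(phi_rcp * v_container / v_sphere))
print("crystal packed  :", int(phi_fcc * v_container / v_sphere))
```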
The random close packing fraction of discs in the plane has also been considered a theoretically unsolved problem because of similar difficulties. An analytical, though not in closed form, solution to this problem was found in 2021 by R. Blumenfeld. The solution was found by limiting the probability of growth of ordered clusters to be exponentially small and relating it to the distribution of `cells', which are the smallest voids surrounded by connected discs. The derived maximum volume fraction is 85.3542%, if only hexagonal lattice clusters are disallowed, and 85.2514% if one disallows also deformed square lattice clusters.
An analytical and closed-form solution for both 2D and 3D, mechanically stable, random packings of spheres has been found by A. Zaccone in 2022 using the assumption that the most random branch of jammed states (maximally random jammed packings, extending up to the fcc closest packing) undergo crowding in a way qualitatively similar to an equilibrium liquid. The reasons for the effectiveness of this solution are the object of ongoing debate.
Definition
Random close packing of spheres does not yet have a precise geometric definition. It is defined statistically, and results are empirical. A container is randomly filled with objects, and then the container is shaken or tapped until the objects do not compact any further; at this point the packing state is RCP. The packing fraction can be defined as the fraction of a given volume of space that is taken up by the particles. In other words, the packing fraction defines the packing density. It has been shown that the filling fraction increases with the number of taps until the saturation density is reached. Also, the saturation density increases as the tapping amplitude decreases. Thus, RCP is the packing fraction given by the limit as the tapping amplitude goes to zero, and the limit as the number of taps goes to infinity.
Effect of object shape
The particle volume fraction at RCP depends on the objects being packed. If the objects are polydisperse then the volume fraction depends non-trivially on the size distribution and can be arbitrarily close to 1. Still, for (relatively) monodisperse objects the value for RCP depends on the object shape; for spheres it is 0.64, for M&M's candy it is 0.68.
For spheres
Example
Products containing loosely packed items are often labeled with this message: 'Contents May Settle During Shipping'.
Usually during shipping, the container will be bumped numerous times, which will increase the packing density.
The message is added to assure the consumer that the container is full on a mass basis, even though the container appears slightly empty. Systems of packed particles are also used as a basic model of porous media.
See also
Close-packing of equal spheres
Sphere packing
Cylinder sphere packing
References
Granularity of materials | Random close pack | [
"Physics",
"Chemistry"
] | 889 | [
"Particle technology",
"Materials",
"Granularity of materials",
"Matter"
] |
12,310,114 | https://en.wikipedia.org/wiki/Control%20banding | Control banding is a qualitative or semi-quantitative risk assessment and management approach to promoting occupational health and safety. It is intended to minimize worker exposures to hazardous chemicals and other risk factors in the workplace and to help small businesses by providing an easy-to-understand, practical approach to controlling hazardous exposures at work.
The principle of control banding was first applied to dangerous chemicals, chemical mixtures, and fumes. The control banding process emphasizes the controls needed to prevent hazardous substances from causing harm to people at work. The greater the potential for harm, the greater the degree of control needed to manage the situation and make the risk “acceptable.”
Control banding is particularly useful in circumstances where there are not established occupational or environmental exposure limits for a chemical. There are 219 million chemicals with a Chemical Abstracts Service (CAS) Registry Number, and less than 500 are regulated by the United States Occupational Safety and Health Administration (OSHA). Employers have a responsibility to protect their workers from harm regardless of whether a substance-specific standard exists, and control banding serves as a proactive approach to fulfilling this duty.
A single control technology or strategy is matched with a single band, or range of exposures (e.g. 1-10 milligrams per cubic meter) for a particular class of chemicals (e.g. skin irritants, reproductive hazards).
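A toy sketch of this banding idea: a hazard-derived target exposure range is mapped to a control strategy, with more toxic substances (lower target ranges) demanding stricter controls. The bands and strategies below are illustrative placeholders, not the COSHH Essentials scheme or any regulatory table.

```python
# Illustrative only: hypothetical bands and control strategies, NOT an actual
# regulatory scheme.  A more toxic substance has a lower target exposure range
# and therefore needs a stricter control approach.
ILLUSTRATIVE_BANDS = [
    (1.0,   "general ventilation and good occupational hygiene practice"),  # 1-10 mg/m^3
    (0.1,   "engineering controls (e.g. local exhaust ventilation)"),
    (0.01,  "containment / enclosure of the process"),
    (0.0,   "seek specialist advice"),
]

def control_approach(target_exposure_mg_m3: float) -> str:
    """Map a hazard-derived target exposure level (mg/m^3) to a control strategy."""
    for lower_bound, strategy in ILLUSTRATIVE_BANDS:
        if target_exposure_mg_m3 >= lower_bound:
            return strategy
    return ILLUSTRATIVE_BANDS[-1][1]

print(control_approach(5.0))     # a 1-10 mg/m^3 substance -> general ventilation
print(control_approach(0.05))    # a 0.01-0.1 mg/m^3 substance -> containment
```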
COSHH
In the United Kingdom, the Health and Safety Executive (HSE) has developed a comprehensive control banding model known as Control of Substances Hazardous to Health (COSHH) Essentials.
Below is an example of four control bands developed for inhalation hazards based on this method.
RISKOFDERM
RISKOFDERM was a project funded by the EU to develop a toolkit to assess the adequacy of control measures in place to protect against substances which could cause adverse dermal effects (i.e. irritation, burns, sensitization)
The toolkit was published in a paper version by the Instituto Nacional de Seguridad y Salud en el Trabajo in Spain. It asks the user a series of questions regarding what substance is being used, how it is used, and what controls are already in place. The user's answers will generate recommendations that vary from taking no additional action to stopping work immediately until the exposure can be reduced.
Respirable Crystalline Silica
The OSHA regulations for respirable crystalline silica in construction utilize control banding to specify what controls employers must implement when working with materials that contain crystalline silica like concrete.
For example, when working outdoors with jackhammers that provide a continuous stream or spray of water at the point of impact, employers are required to provide industrial respirators if the work will take place over more than 4 hours in a single shift. This qualitative method of implementing controls helps protect workers in environments that may vary day to day.
Biosafety Lab Levels
The Centers for Disease Control and Prevention (CDC) have established biosafety lab levels as a set of precautions to utilize when working with biological agents. These precautions are stratified based on the potential for these agents to cause disease, and they offer a qualitative method of ensuring the safety of laboratory workers and minimizing the potential for accidental release.
Control banding is particularly useful when working with biological agents in research environments where the infectious dose may not be well defined. It also provides a standardized method of ensuring that a laboratory has appropriate controls in place prior to receiving authorization to begin new research projects.
Pharmaceuticals
The use of control banding strategies has become very popular in the pharmaceutical industry where early stage development compounds may have little or no toxicology data.
One control banding scheme in the pharmaceutical industry was proposed by Dr. Bruce Naumann. It involves assigning a chemical a Merck Performance-Based Exposure Control Limit (PB-ECL) category based on its toxicological properties and then, based on that category, applying predetermined controls.
Below is a table which compares portions of this method to the one proposed by COSHH.
Limitations of Control Banding
Control banding is not without limitations and still requires professional knowledge and experience to verify that the control measures specified have been properly installed, maintained, and used. Controls should be validated prior to use by either using substance specific industrial hygiene methods or performing surrogate monitoring.
See also
References
External links
NIOSH Safety and Health Topic: Control Banding
COSHH Essentials
IOHA Control Banding
BAuA: Easy-to-use control scheme for hazardous substances (EMKG)
BAuA: EMKG-Expo-Tool for Exposure Assessment (Workers)
Stoffenmanager
Chemwatch - Control Banding Risk Assessment Tool
Hazard analysis
Occupational safety and health | Control banding | [
"Engineering"
] | 966 | [
"Safety engineering",
"Hazard analysis"
] |
12,310,699 | https://en.wikipedia.org/wiki/4-Nitroaniline | 4-Nitroaniline, p-nitroaniline or 1-amino-4-nitrobenzene is an organic compound with the formula C6H6N2O2. A yellow solid, it is one of three isomers of nitroaniline. It is an intermediate in the production of dyes, antioxidants, pharmaceuticals, gasoline, gum inhibitors, poultry medicines, and as a corrosion inhibitor.
Synthesis
4-Nitroaniline is produced industrially via the amination of 4-nitrochlorobenzene:
ClC6H4NO2 + 2 NH3 → H2NC6H4NO2 + NH4Cl
Below is a laboratory synthesis of 4-nitroaniline from aniline. The key step in this reaction sequence is an electrophilic aromatic substitution to install the nitro group para to the amino group. Under the acidic nitration conditions the amino group is easily protonated, which would turn it into a meta director, so the amine is first protected as its acetyl derivative (acetanilide); the acetyl group is removed by hydrolysis after the nitration. After this reaction, a separation must be performed to remove 2-nitroaniline, which is also formed in a small amount during the reaction.
Applications
4-Nitroaniline is mainly consumed industrially as a precursor to p-phenylenediamine, an important dye component. The reduction is effected using iron metal and by catalytic hydrogenation.
It is a starting material for the synthesis of Para Red, the first azo dye:
It is also a precursor to 2,6-dichloro-4-nitroaniline, also used in dyes.
Laboratory use
Nitroaniline undergoes diazotization, which allows access to 1,4-dinitrobenzene and nitrophenylarsonic acid. With phosgene, it converts to 4-nitrophenylisocyanate.
Carbon snake demonstration
When heated with sulfuric acid, it dehydrates and polymerizes explosively into a rigid foam. The exact composition of the foam is unclear, but the process is believed to involve acidic protonation as well as displacement of the amine group by a sulfonic acid moiety.
In the carbon snake demonstration, para-nitroaniline can be used instead of sugar, provided the experiment is carried out in a fume hood. With this method the reaction phase prior to the black snake's appearance is longer, but once complete, the black snake itself rises from the container very rapidly. This reaction may cause an explosion if too much sulfuric acid is used.
Toxicity
The compound is toxic by way of inhalation, ingestion, and absorption, and should be handled with care. Its LD50 in rats is 750 mg/kg when administered orally. 4-Nitroaniline is particularly harmful to all aquatic organisms, and can cause long-term damage to the environment if released as a pollutant.
See also
2-Nitroaniline
3-Nitroaniline
References
External links
Safety (MSDS)data for p-nitroaniline
MSDS Sheet for p-nitroaniline
Sigma-Aldrich Catalog data
CDC - NIOSH Pocket Guide to Chemical Hazards
Anilines
Dyes
Hazardous air pollutants
IARC Group 3 carcinogens
Nitrobenzene derivatives
Corrosion inhibitors | 4-Nitroaniline | [
"Chemistry"
] | 681 | [
"Corrosion inhibitors",
"Process chemicals"
] |
12,311,837 | https://en.wikipedia.org/wiki/Bergeron%20diagram | The Bergeron diagram method is a method to evaluate the effect of a reflection on an electrical signal. This graphic method—based on the real characteristic of the line—is valid for both linear and non-linear models and helps to calculate the delay of an electromagnetic signal on an electric transmission line.
Using the Bergeron method, on the I–V characteristic chart one starts from the operating point before the transition, then moves along a straight line with a slope of Z0 (where Z0 is the line's characteristic impedance) to the new characteristic; one then moves along lines of slope −Z0 or +Z0 until the new steady-state operating point is reached.
The − value is considered always the same at every reflection because the Bergeron method is used only for first reflections.
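A numerical sketch of the successive-reflection bookkeeping that the Bergeron construction performs graphically, for an assumed purely resistive source and load on a lossless line (example values only).

```python
# Step launched from a 5 V source with 25 ohm internal resistance into a
# 50 ohm line terminated in 150 ohm; follow the first few reflections.
Vs, Rs, Rl, Z0 = 5.0, 25.0, 150.0, 50.0

gamma_s = (Rs - Z0) / (Rs + Z0)        # source reflection coefficient
gamma_l = (Rl - Z0) / (Rl + Z0)        # load reflection coefficient

v_wave = Vs * Z0 / (Rs + Z0)           # first wave launched into the line
v_load = 0.0
for bounce in range(6):
    v_load += v_wave * (1 + gamma_l)   # incident plus reflected add at the load
    v_wave *= gamma_l * gamma_s        # amplitude after one full round trip
    print(f"after bounce {bounce}: load voltage = {v_load:.3f} V")

print("steady state:", Vs * Rl / (Rs + Rl), "V")   # simple resistive divider
```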
The method was originally developed by a French hydraulic engineer, L. J. B. Bergeron, for analysing water hammer effects in hydraulic systems.
See also
Ringing (signal)
Signal reflection
External links
Detailed description of the Bergeron diagram method
Texas Instruments application reports AN-806 Data Transmission Lines and Their Characteristics and AN-807 Reflections: Computations and Waveforms, 2004
Telecommunications engineering | Bergeron diagram | [
"Engineering"
] | 236 | [
"Electrical engineering",
"Telecommunications engineering"
] |
12,312,576 | https://en.wikipedia.org/wiki/Analytic%20Fredholm%20theorem | In mathematics, the analytic Fredholm theorem is a result concerning the existence of bounded inverses for a family of bounded linear operators on a Hilbert space. It is the basis of two classical and important theorems, the Fredholm alternative and the Hilbert–Schmidt theorem. The result is named after the Swedish mathematician Erik Ivar Fredholm.
Statement of the theorem
Let $G \subseteq \mathbb{C}$ be a domain (an open and connected set). Let $H$ be a real or complex Hilbert space and let Lin(H) denote the space of bounded linear operators from H into itself; let I denote the identity operator. Let $B : G \to \mathrm{Lin}(H)$ be a mapping such that
B is analytic on G in the sense that the limit $\lim_{h \to 0} \frac{B(\lambda + h) - B(\lambda)}{h}$ exists for all $\lambda \in G$; and
the operator B(λ) is a compact operator for each $\lambda \in G$.
Then either
(I − B(λ))⁻¹ does not exist for any λ ∈ G; or
(I − B(λ))⁻¹ exists for every λ ∈ G \ S, where S is a discrete subset of G (i.e., S has no limit points in G). In this case, the function taking λ to (I − B(λ))⁻¹ is analytic on G \ S and, if λ ∈ S, then the equation B(λ)ψ = ψ has a finite-dimensional family of solutions.
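A finite-dimensional illustration (a sketch only, with an invented finite-rank family standing in for a compact operator): for B(λ) = λK, the operator I − B(λ) fails to be invertible exactly on the discrete set of reciprocals of the nonzero eigenvalues of K, matching the second alternative above.

```python
import numpy as np

# Take the analytic family B(lambda) = lambda * K with K a fixed symmetric
# matrix.  det(I - B(lambda)) vanishes only at the isolated points
# lambda = 1 / mu for the nonzero eigenvalues mu of K.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
K = (A + A.T) / (2 * n)

mus = [mu for mu in np.linalg.eigvalsh(K) if abs(mu) > 1e-12]
singular_points = sorted(1.0 / mu for mu in mus)

for lam in [0.0, singular_points[0], singular_points[0] + 0.1]:
    det = np.linalg.det(np.eye(n) - lam * K)
    print(f"lambda = {lam:8.4f}   det(I - B(lambda)) = {det: .3e}")
# The determinant is (numerically) zero only at the isolated singular points.
```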
References
(Theorem 8.92)
Fredholm theory
Theorems in functional analysis
Theorems in complex analysis | Analytic Fredholm theorem | [
"Mathematics"
] | 237 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis",
"Theorems in functional analysis"
] |
12,312,944 | https://en.wikipedia.org/wiki/Azlon | Azlon is a synthetic textile fiber composed of protein material derived from natural sources such as soy, peanut, milk or corn. Currently it is used in clothing.
Regulation
Canada
Under the Textile Labeling and Advertising Regulations, Section 26(f), Azlon is defined as any fiber made from regenerated protein.
United States
The name "Azlon" is regulated by the Federal Trade Commission, § 303.7(g) Rules and Regulations Under the Textile Fiber Products Identification Act. However, there is currently no domestic production.
Azlon is the common generic name for all man-made protein fibers. Aralac was a registered trademark of Aralac, Inc., a division of National Dairy Products Corporation. Its production from unrationed skimmed-milk supplies may have contributed to its popularization during the Second World War.
United Kingdom
Azlon is also a brand of plastic labware. It is a registered trade mark of SciLabware Limited.
See also
Casein
Milk fiber
References
External links
Meet the Azlons from A to Z: Regenerated & Rejuvenated
Azlon Fiber
Synthetic fibers | Azlon | [
"Physics",
"Chemistry"
] | 225 | [
"Synthetic fibers",
"Materials stubs",
"Synthetic materials",
"Materials",
"Matter"
] |
12,313,191 | https://en.wikipedia.org/wiki/Limits%20of%20integration | In calculus and mathematical analysis the limits of integration (or bounds of integration) of the integral
of a Riemann integrable function defined on a closed and bounded interval are the real numbers and , in which is called the lower limit and the upper limit. The region that is bounded can be seen as the area inside and .
For example, in the integral
$\int_2^4 x^3 \, dx$
the function $f(x) = x^3$ is defined on the interval $[2, 4]$,
with the limits of integration being $2$ and $4$.
Integration by Substitution (U-Substitution)
In integration by substitution, the limits of integration change because a new variable of integration is introduced. For a substitution $u = g(x)$, the original limits $a$ and $b$ are converted into values of $u$. In general,
$\int_a^b f(g(x)) \, g'(x) \, dx = \int_{g(a)}^{g(b)} f(u) \, du,$
where $u = g(x)$ and $du = g'(x) \, dx$. Thus, the limits are expressed in terms of $u$; the lower bound is $g(a)$ and the upper bound is $g(b)$.
For example,
$\int_0^2 2x \cos(x^2) \, dx = \int_0^4 \cos(u) \, du,$
where $u = x^2$ and $du = 2x \, dx$. Thus, $u = 0$ when $x = 0$ and $u = 4$ when $x = 2$. Hence, the new limits of integration are $0$ and $4$.
The same applies for other substitutions.
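A quick numerical check of the transformed limits for the substitution u = x², using an assumed example integrand.

```python
from scipy.integrate import quad
import numpy as np

# integral_0^2 of 2x*cos(x^2) dx should equal integral_0^4 of cos(u) du,
# because u = x^2 maps the limits x = 0, 2 to u = 0, 4.
lhs, _ = quad(lambda x: 2 * x * np.cos(x ** 2), 0, 2)
rhs, _ = quad(np.cos, 0, 4)
print(lhs, rhs, np.sin(4))   # all three agree (~ -0.7568)
```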
Improper integrals
Limits of integration can also be defined for improper integrals, with the limits of integration of both
$\lim_{z \to a^+} \int_z^b f(x) \, dx$ and $\lim_{z \to b^-} \int_a^z f(x) \, dx$
again being a and b. For an improper integral
$\int_a^\infty f(x) \, dx$ or $\int_{-\infty}^b f(x) \, dx$
the limits of integration are a and ∞, or −∞ and b, respectively.
Definite Integrals
If $c$ is any point with $a < c < b$, then $\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx$.
See also
Integral
Riemann integration
Definite integral
References
Integral calculus
Real analysis | Limits of integration | [
"Mathematics"
] | 249 | [
"Integral calculus",
"Calculus"
] |
12,313,670 | https://en.wikipedia.org/wiki/Maria%20reactor | The Maria reactor is Poland's second nuclear research reactor, commissioned in December 1974, and the only one still in use. The first was the Ewa reactor (EWA), which was commissioned in June 1958 and dismantled by 2002. It is located at Narodowe Centrum Badań Jądrowych - "NCBJ" (National Center for Nuclear Research) at Świerk-Otwock, near Warsaw and named in honor of Maria Skłodowska-Curie. It is the only reactor of Polish design.
Maria is a multifunctional research tool, with a notable application in the production of radioisotopes, research with utilization of neutron beams, neutron therapy, and neutron activation analysis. It operates about 4,000 hours annually, usually in blocks of 100 hours.
Technical description
The technical details of the reactor are given in the references.
Maria is a pool-type reactor with a power of 30 MW (thermal). Despite being a pool reactor, it contains channels (aluminum tubes) individually connected to the primary coolant. The water pool provides cooling for elements (e.g., fuel elements) that are not otherwise cooled, and also acts as radiation shielding. Maria uses enriched uranium as fuel (80% enrichment in 235U until 1999, and 36% since). The fuel elements and channels are vertical but arranged conically. Water and beryllium blocks serve as the moderator (70% and 30% of the moderation, respectively). Elements of boron carbide sheathed in aluminum are utilized for control, compensation, and safety. The use of beryllium blocks permits a comparatively large fuel lattice pitch, and consequently a large volume for payload targets. There is also a graphite reflector (aluminum sheathed). Maria supplies a neutron flux of 4×10¹⁴ n/(cm²·s) (thermal neutrons) and 2×10¹⁴ n/(cm²·s) (fast neutrons). There are six horizontal channels for controlled use of neutron beams. There is also a window of lead-containing glass through which the core can be viewed. The reactor is housed in a sealed containment.
Following preparation which started in 2004, Maria was converted to use low-enriched uranium (LEU) fuel by 2012.
History
Construction began on June 16, 1970 and the reactor was activated on December 18, 1974. With the shutdown of the Ewa reactor in 1995 it became Poland's only research nuclear reactor.
In 2015, Maria was relicensed for an additional 10 years of operation, until 2025.
Production of medical radioisotopes
In February 2010, it was announced that Maria would start producing medical isotopes in cooperation with Covidien, to help ease the isotope shortages due to shutdowns of the Canadian NRU reactor and the Dutch Petten nuclear reactor.
See also
Anna reactor
List of nuclear reactors
References
Nuclear research reactors
Nuclear research institutes
Research institutes in Poland
Neutron facilities
External links
https://www.ncbj.gov.pl/en/maria-reactor | Maria reactor | [
"Engineering"
] | 619 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
12,315,477 | https://en.wikipedia.org/wiki/Indoor%20mold | Indoor mold (American English) or indoor mould (British English), also sometimes referred to as mildew, is a fungal growth that develops on wet materials in interior spaces. Mold is a natural part of the environment and plays an important part in nature by breaking down dead organic matter such as fallen leaves and dead trees; indoors, mold growth should be avoided. Mold reproduces by means of tiny spores. The spores are like seeds, but invisible to the naked eye, that float through the air and deposit on surfaces. When the temperature, moisture, and available nutrient conditions are correct, the spores can form into new mold colonies where they are deposited. There are many types of mold, but all require moisture and a food source for growth.
Health effects
Mold is ubiquitous, and mold spores are a common component of household and workplace dust. In large amounts they can lead to health issues in humans, such as allergic reactions and respiratory diseases.
Symptoms
Symptoms of mold exposure may include nasal congestion; sinusitis; rhinorrhea, eye irritation; respiratory difficulties, such as wheezing, chest pain, cough, and persistent sneezing; throat irritation; skin irritation, such as a rash; and headache. Immunocompromised people and people with chronic lung illnesses, such as obstructive lung disease, may get serious infections in their lungs when they are exposed to mold. These people should stay away from areas that are likely to have mold, such as compost piles, cut grass, and wooded areas. More than half of adult workers in moldy/humid buildings suffer from nasal or sinus symptoms due to mold exposure.
Asthma
Infants in homes with mold have a much greater risk of developing asthma and allergic rhinitis. Infants may develop respiratory symptoms as a result of exposure to Penicillium, a fungal genus. Signs of mold-related respiratory problems in an infant include a persistent cough or wheeze.
Mold exposure has a variety of health effects, and sensitivity to mold varies. Exposure to mold may cause throat irritation, nasal stuffiness, eye irritation, cough and wheezing and skin irritation in some cases. Exposure to mold may heighten sensitivity, depending on the time and nature of exposure. People with chronic lung diseases are at higher risk for mold allergies, and will experience more severe reactions when exposed to mold. Damp indoor environments correlate with upper-respiratory-tract symptoms, such as coughing and wheezing in people with asthma.
Mycotoxins
Some mold produce mycotoxins, chemical components of their cell walls, that can pose serious health risks to humans and animals. "Toxic mold" refers to mold which produce mycotoxins, such as Stachybotrys chartarum. Exposure to high levels of mycotoxins can lead to neurological disorders and death. Prolonged exposure (for example, on a daily basis) can be particularly harmful. Mycotoxins can persist in the indoor environment even after death of the fungi. They can adhere to dust particles and can spread through the air attached to these dust particles or spores. There must be very specific temperature and humidity conditions in order for fungi to produce mycotoxins.
Causes and growing conditions
Mold is found everywhere and can grow on almost any substance when moisture is present. It reproduces by spores, which are carried by air currents. When spores land on a moist surface suitable for life, they begin to grow. Mold is normally found indoors at levels that do not affect most healthy individuals.
Because common building materials are capable of sustaining mold growth and mold spores are ubiquitous, mold growth in an indoor environment is typically related to water or moisture exposure and may be caused by incomplete drying of flooring materials (such as concrete). Flooding, leaky roofs, poor building maintenance, or indoor plumbing problems can lead to interior mold growth. Water vapor commonly condenses on surfaces cooler than the moisture-laden air, enabling mold to flourish. This moisture vapor passes through walls and ceilings, typically condensing during the winter in climates with a long heating season. Floors over crawl spaces and basements, without vapor barriers or with dirt floors, are mold-prone. The "doormat test" detects moisture from concrete slabs without a sub-slab vapor barrier. Inorganic materials, such as metal or polished concrete, do not support mold growth, although surface mold growth is still possible.
Significant mold growth requires moisture and food sources and a substrate capable of sustaining growth. Common cellulose-based building materials, such as plywood, drywall, furring strips, finish carpentry, cabinetry, wood framing, composite wood flooring, carpets, and carpet padding provide food for mold. In carpet, organic load such as invisible dust and cellulose are food sources. After water damage to a building, mold grows in walls and then becomes dormant until subsequent high humidity; suitable conditions reactivate mold. Mycotoxin levels are higher in buildings which have had a water incident.
Hidden mold
Mold is detectable by smell and signs of water damage on walls or ceiling and can grow in places invisible to the human eye. It may be found behind wallpaper or paneling, on the inside of dropped ceilings, the back of drywall, or the underside of carpets or carpet padding. Piping in walls may also be a source of mold, since they may leak (causing moisture and condensation).
Spores need three things to grow into mold: nutrients – cellulose (the cell wall of green plants) is a common food for indoor spores; moisture – to begin the decaying process caused by mold; and time – mold growth begins from 24 hours to 10 days after the provision of growing conditions.
Mold colonies can grow inside buildings, and the chief hazard is the inhalation of mycotoxins. After a flood or major leak, mycotoxin levels are higher – even after a building has dried out.
Food sources for mold in buildings include cellulose-based materials such as wood, cardboard and the paper facing on drywall and organic matter such as soap, textiles, and dust containing skin cells. If a house has mold, the moisture may originate in the basement or crawl space, a leaking roof or a leak in plumbing pipes. Insufficient ventilation may accelerate moisture buildup. Visible mold colonies may form where ventilation is poorest and on perimeter walls (because they are nearest the dew point).
If there are mold problems in a house only during certain times of the year, the house is probably too airtight or too drafty. Mold problems occur in airtight homes more frequently in the warmer months (when humidity is high inside the house, and moisture is trapped), and occur in drafty homes more frequently in the colder months (when warm air escapes from the living area and condenses). If a house is artificially humidified by the use of a humidifier during the winter, this can create conditions favorable to mold. Moving air may prevent mold from growing, since it has the same desiccating effect as low humidity. Mold grows best at warm temperatures, although growth may also occur in cooler conditions.
Removing one of the three requirements for mold reduces (or eliminates) new mold growth: moisture; food for the mold spores (for example, dust or dander); and warmth since mold generally does not grow in cold environments.
Heating, ventilation, and air conditioning (HVAC) systems can produce all three requirements for mold growth. The air conditioning system creates a difference in temperature, encouraging condensation. The high rate of dusty air movement through an HVAC system may furnish ample food for mold. Since the air-conditioning system is not always running, warm conditions are the final component for mold growth.
Prevention
Mold growth can be inhibited by keeping surfaces at conditions that are further from condensation, with relative humidity levels below 75%. This usually translates to a relative humidity of indoor air below 60%, in agreement with the guidelines for thermal comfort that recommend a relative humidity between 40 - 60 %. Moisture buildup in buildings may arise from water penetrating areas of the building envelope or fabric, from plumbing leaks, rainwater or groundwater penetration, or from condensation due to improper ventilation, insufficient heating or poor thermal quality of the building envelope. Even something as simple as drying clothes indoors on radiators can increase the risk of mold growth, if the humidity produced is not able to escape the building via ventilation.
Residential mold may be prevented and controlled by cleaning and repairing rain gutters, to prevent moisture seepage into the home; keeping air-conditioning drip pans clean and drainage lines clear; monitoring indoor humidity; drying areas of moisture or condensation and removing their sources; ensuring that there is adequate ventilation by installing an exhaust fan in your bathroom; treating exposed structural wood or wood framing with a fungicidal encapsulation coating after pre-cleaning (particularly homes with a crawl space, unfinished basement, or a poorly-ventilated attic).
Assessment
An observation of the indoor environment should be conducted before any sampling is performed. The area should be surveyed for odors indicating mold or bacterial growth, moisture sources, such as stagnant water or leaking pipes, and water-damaged building materials. This can include moving furniture, lifting (or removing) carpets, checking behind wallpaper or paneling, checking ventilation ductwork and exposing wall cavities. Efforts typically focus on areas where there are signs of liquid moisture or water vapor (humidity), or where moisture problems are suspected. In many cases, if materials have failed to dry out several days after the suspected water event, mold growth can be suspected even if it is not immediately visible. Often, quick decisions about the immediate safety and health of the environment can be made by these observations before sampling is even needed. The United States Environmental Protection Agency (EPA) does not generally recommend sampling unless an occupant of the space has symptoms. In most cases, if visible mold growth is present, sampling is unnecessary. Sampling should be performed by a trained professional with specific experience in mold-sampling protocols, sampling methods and the interpretation of findings. It should be done only to make a particular determination, such as airborne spore concentration or identifying a particular species.
Sampling
Before sampling, a subsequent course of action should be determined.
In the U.S., sampling and analysis should follow the recommendations of the Occupational Safety and Health Administration (OSHA), National Institute for Occupational Safety and Health (NIOSH), the EPA and the American Industrial Hygiene Association (AIHA). Types of samples include air, surface, bulk, dust, and swab. Multiple types of sampling are recommended by the AIHA, since each has limitations.
Air sampling
Air is the most common form of sampling to assess mold levels. Although, the Environmental Protection Agency (EPA) does not have any current testing protocols. Air sampling is considered to be the most representative method for assessing respiratory exposure to mold. Indoor and outdoor air are sampled, and their mold spore concentrations are compared. Indoor mold concentrations should be less than or equal to outdoor concentrations with similar distributions of species. A predominant difference in species or higher indoor concentrations can indicate poor indoor air quality and a possible health hazard. Air sampling can be used to identify hidden mold and is often used to assess the effectiveness of control measures after remediation. An indoor mold air sampling campaign should be performed over the course of at least several days as the environmental conditions can lead to variations in the day-to-day mold concentration. Stationary samplers assess a specific environment, such as a room or building, whereas personal samplers assess the mold exposure one person receives in all of the environments they enter over the course of sampling. Personal samplers can be attached to workers to assess their respiratory exposures to molds on the job. Personal samplers usually show higher levels of exposure than stationary samples due to the "personal cloud" effect, where the activities of the person re-suspend settled particles. There are several methods that can be used for indoor mold air sampling.
Swab and surface sampling
Surface sampling measures the number of mold spores deposited on indoor surfaces. With swab sampling, a cotton swab is rubbed across the area being sampled, often a measured area, and subsequently sent to the mold testing laboratory. The swab can be rubbed on an agar plate to grow the mold on a growth medium. Final results indicate mold levels and species located in the suspect area. Surface sampling can be used to identify the source of mold exposure. Molecular analyses, such as qPCR, may also be used for species identification and quantification. Swab and surface sampling can give detailed information about the mold, but cannot measure the actual mold exposure because it is not aerosolized.
Bulk and dust sampling
Bulk removal of material from the contaminated area is used to identify and quantify the mold in the sample. This method is often used to verify contamination and identify the source of contamination. Dust samples can be collected using a vacuum with a collection filter attached. Dust from surfaces such as floors, beds, or furniture is often collected to assess health effects from exposure in epidemiology studies. Researchers of indoor mold also use a long-term settled dust collection system where a dust cloth or a petri dish is left out in the environment for a set period of time, sometimes weeks. Dust samples can be analyzed using culture-based or culture-independent methods. Quantitative PCR is a DNA-based molecular method that can identify and quantify fungal species. The Environmental Relative Moldiness Index (ERMI) is a numerical index that can be used in epidemiological studies to assess mold burdens of houses in the United States. The ERMI consists of a list of 36 fungal species commonly associated with damp houses that can be measured using qPCR. Like swab and surface sampling, bulk and dust sampling can give detailed information about the mold source, but cannot accurately determine the level of exposure to the source.
Remediation
In a situation where there is visible mold and the indoor air quality may have been compromised, mold remediation may be needed. The first step in solving an indoor mold problem is to remove the moisture source; new mold will begin to grow on moist, porous surfaces within 24 to 48 hours. There are a number of ways to prevent mold growth. Some cleaning companies specialize in fabric restoration, removing mold (and mold spores) from clothing to eliminate odor and prevent further damage to garments.
The effective way to clean mold is to use detergent solutions which physically remove mold. Many commercially available detergents marketed for mold cleanup include an antifungal agent.
Mold will start to grow once moisture and organic material come together. This can happen anywhere in a property including bathrooms, walls, garages, bedrooms, kitchens, etc. A smell is a good indicator that there is mold growth that needs immediate attention. If not attended to, the growth can spread through the property contributing to adverse health problems and causing secondary damage to the structure and its contents. Significant mold growth may require professional mold remediation to remove the affected building materials and eradicate the source of excess moisture. In extreme cases of mold growth in buildings, it may be more cost-effective to condemn the building than to reduce mold to safe levels.
The goals of remediation are to remove (or clean) contaminated materials, preventing fungi (and fungi-contaminated dust) from entering an occupied (or non-contaminated) area while protecting workers performing the abatement.
Cleanup and removal methods
The purpose of cleanup is to eliminate mold and remove contaminated materials. Killing mold with a biocide is insufficient, since chemical substances and proteins causing reactions in humans remain in dead mold. The following methods are used.
Evaluation: Before remediation, the area is assessed to ensure safety, clean up the entire moldy area, and properly approach the mold. The EPA provides the following instructions:
HVAC cleaning: Should be done by a trained professional.
Protective clothing: Includes a half- or full-face respirator. Goggles with a half-face respirator prevent mold spores from reaching the mucous membranes of the eyes. Disposable hazmat suits are available to keep out particles down to one micrometer, and personal protective equipment keeps mold spores from entering skin cuts. Gloves are made of rubber, nitrile, polyurethane, or neoprene.
Dry brushing or agitation device: Wire brushing or sanding is used when microbial growth can be seen on solid wood surfaces such as framing or underlayment (the subfloor).
Dry-ice blasting: Removes mold from wood and cement; however, this process may spray mold and its byproducts into surrounding air.
Wet vacuum: Wet vacuuming is used on wet materials, and this method is one of those approved by the EPA.
Damp wipe: Removal of mold from non-porous surfaces by wiping or scrubbing with water and a detergent and drying quickly.
HEPA (high-efficiency particulate air) vacuum cleaner: Used in remediation areas after materials have been dried and contaminated materials removed; collected debris and dust is stored to prevent debris release.
Debris disposal: Sealed in the remediation area, debris is usually discarded with ordinary construction waste.
Equipment
Equipment used in mold remediation includes:
Moisture meter to measure drying of damaged materials;
Humidity gauge, often paired with a thermometer;
Borescope, a flexible tube with a camera at the end, to illuminate potential mold problems inside walls, ceilings and crawl spaces;
Digital camera to document findings during evaluation;
Personal protective equipment (PPE): respirators, gloves, impervious suit, and eye protection;
Thermographic camera, infrared thermal-imaging cameras to identify secondary moisture sources.
HEPA vacuum, a vacuum fitted with a high-efficiency particulate air (HEPA) filter, commonly used in mold remediation to safely remove mold spores from surfaces and building materials so that the spores do not become airborne.
Protection levels
During mold remediation in the U.S., the level of contamination dictates the protection level for remediation workers. Contamination levels have been enumerated as I, II, III, and IV:
Level I: Small, isolated areas (10 square feet or less); remediation may be conducted by trained building staff;
Level II: Mid-sized, isolated areas (10–30 square feet); may also be remediated by trained, protected building staff;
Level III: Large, isolated areas (30–100 square feet): Professionals experienced in microbial investigations or mold remediation should be consulted, and personnel should be trained in the handling of hazardous materials and equipped with respiratory protection, gloves and eye protection;
Level IV: Extensive contamination (more than 100 contiguous square feet); requires trained, equipped professionals
After remediation, the premises should be reevaluated to ensure success.
See also
Environmental engineering
Environmental health
Greenguard Environmental Institute
High-ozone shock treatment
House dust mite
Hurricane response
Occupational asthma
Sick building syndrome
Notes
External links
Environmental Protection Agency Mold Homepage
Building biology
Cleaning
Fungi and humans
Industrial hygiene
Occupational safety and health
Indoor air pollution | Indoor mold | [
"Chemistry",
"Engineering",
"Biology"
] | 3,887 | [
"Fungi",
"Building engineering",
"Surface science",
"Fungi and humans",
"Building biology",
"Cleaning",
"Humans and other species"
] |
17,932,042 | https://en.wikipedia.org/wiki/Critical%20field | For a given temperature, the critical field refers to the maximum magnetic field strength below which a material remains superconducting. Superconductivity is characterized both by perfect conductivity (zero resistance) and by the complete expulsion of magnetic fields (the Meissner effect). Changes in either temperature or magnetic flux density can cause the phase transition between normal and superconducting states. The highest temperature under which the superconducting state is seen is known as the critical temperature. At that temperature even the weakest external magnetic field will destroy the superconducting state, so the strength of the critical field is zero. As temperature decreases, the critical field increases generally to a maximum at absolute zero.
For a type-I superconductor the discontinuity in heat capacity seen at the superconducting transition is generally related to the slope of the critical field (Hc) at the critical temperature (Tc) by the Rutgers relation (per unit volume): ΔC = μ0 Tc (dHc/dT)², evaluated at T = Tc.
There is also a direct relation between the critical field and the critical current – the maximum electric current density that a given superconducting material can carry, before switching into the normal state. According to Ampère's law any electric current induces a magnetic field, but superconductors exclude that field. On a microscopic scale, the magnetic field is not quite zero at the edges of any given sample – a penetration depth applies. For a type-I superconductor, the current must remain zero within the superconducting material (to be compatible with zero magnetic field), but can then go to non-zero values at the edges of the material on this penetration-depth length-scale, as the magnetic field rises. As long as the induced magnetic field at the edges is less than the critical field, the material remains superconducting, but at higher currents, the field becomes too strong and the superconducting state is lost. This limit on current density has important practical implications in applications of superconducting materials – despite zero resistance they cannot carry unlimited quantities of electric power.
The geometry of the superconducting sample complicates the practical measurement of the critical field – the critical field is defined for a cylindrical sample with the field parallel to the axis of radial symmetry. With other shapes (spherical, for example), there may be a mixed state with partial penetration of the exterior surface by the magnetic field (and thus partial normal state), while the interior of the sample remains superconducting.
Type-II superconductors allow a different sort of mixed state, where the magnetic field (above the lower critical field ) is allowed to penetrate along cylindrical "holes" through the material, each of which carries a magnetic flux quantum. Along these flux cylinders, the material is essentially in a normal, non-superconducting state, surrounded by a superconductor where the magnetic field goes back to zero. The width of each cylinder is on the order of the penetration depth for the material. As the magnetic field increases, the flux cylinders move closer together, and eventually at the upper critical field , they leave no room for the superconducting state and the zero-resistivity property is lost.
Upper critical field
The upper critical field is the magnetic flux density (usually expressed with the unit tesla (T)) that completely suppresses superconductivity in a type-II superconductor at 0 K (absolute zero).
More properly, the upper critical field is a function of temperature (and pressure) and if these are not specified, absolute zero and standard pressure are implied.
Werthamer–Helfand–Hohenberg theory predicts the upper critical field (Hc2) at 0 K from the critical temperature Tc and the slope of Hc2 at Tc; in the dirty limit, Hc2(0) ≈ 0.69 Tc |dHc2/dT| evaluated at Tc.
The upper critical field (at 0 K) can also be estimated from the coherence length (ξ) using the Ginzburg–Landau expression: Hc2 = Φ0 / (2πξ²), where Φ0 is the magnetic flux quantum.
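A short numerical sketch of the Ginzburg–Landau estimate above; the 10 nm coherence length is an assumed example value.

```python
import math

# Upper critical field from B_c2 = Phi_0 / (2 * pi * xi^2),
# with Phi_0 = h / (2e) the magnetic flux quantum.
h = 6.62607015e-34        # Planck constant, J s
e = 1.602176634e-19       # elementary charge, C
phi0 = h / (2 * e)        # flux quantum, ~2.07e-15 Wb

xi = 10e-9                # coherence length, assumed 10 nm
Bc2 = phi0 / (2 * math.pi * xi ** 2)
print(f"B_c2 ~ {Bc2:.2f} T for xi = {xi * 1e9:.0f} nm")   # ~3.3 T
```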
Lower critical field
The lower critical field is the magnetic flux density at which the magnetic flux starts to penetrate a type-II superconductor.
References
Superconductivity | Critical field | [
"Physics",
"Materials_science",
"Engineering"
] | 824 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
9,912,339 | https://en.wikipedia.org/wiki/RRS%20James%20Cook | RRS James Cook is a British Royal Research Ship operated by the Natural Environment Research Council (NERC). She was built in 2006, with funds from Britain's NERC and the DTI's Large Scientific Facilities Fund, to replace the ageing RRS Charles Darwin. She was named after Captain James Cook, the British explorer, navigator and cartographer, at a ceremony at the National Oceanography Centre, Southampton, by Anne, Princess Royal.
On her maiden scientific voyage, on 5 March 2007, the James Cook set off to study the Fifteen-Twenty fracture zone.
James Cook was involved in the discovery of what is believed to be the world's deepest undersea volcanic vents, while in the Caribbean in 2010.
In September 2015, while on a cruise studying the seabed and marine life of the Whittard Canyon on the northern margin of the Bay of Biscay, oceanographers pictured what they believe was the first blue whale in English waters since the mammals were almost hunted to extinction in the north-east Atlantic.
In January 2020 she left Fort Lauderdale to take part in the Go-Ship programme of scientific expeditions, studying the changes in the physical and chemical make-up of the North Atlantic as a result of anthropogenic warming. The voyage ended at Tenerife in early March.
See also
– United States equivalent
References
External links
Skipsteknisk AS Design ST-345
National Oceanography Centre – Sea Systems – RRS James Cook, Southampton
Movie of the hull launch of the RRS James Cook in Gdansk, Poland
Natural Environment Research Council
Oceanographic instrumentation
Research vessels of the United Kingdom
2005 ships
Ships built in Gdańsk | RRS James Cook | [
"Technology",
"Engineering"
] | 325 | [
"Oceanographic instrumentation",
"Measuring instruments"
] |
9,913,028 | https://en.wikipedia.org/wiki/Annihilation%20radiation | Annihilation radiation is a term used in gamma spectroscopy for the photon radiation produced when a particle and its antiparticle collide and annihilate. Most commonly, this refers to 511-keV photons produced by an electron interacting with a positron. These photons are frequently referred to as gamma rays, despite having their origin outside the nucleus, due to unclear distinctions between types of photon radiation. Positively charged electrons (positrons) are emitted from the nucleus as it undergoes β+ decay. The positron travels a short distance (a few millimeters), depositing any excess energy before it combines with a free electron. The mass of the e− and e+ is completely converted into two photons with an energy of 511 keV each. These annihilation photons are emitted in opposite directions, 180˚ apart. This is the basis for PET scanners in a process called coincidence counting.
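A one-line check that the 511 keV figure is simply the electron rest energy (standard physical constants).

```python
# The annihilation photon energy equals the electron rest energy, E = m_e * c^2.
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s
e = 1.602176634e-19        # joules per electronvolt

E_keV = m_e * c ** 2 / e / 1e3
print(f"m_e c^2 = {E_keV:.1f} keV")   # ~511.0 keV
```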
Annihilation radiation is not monoenergetic, unlike gamma rays produced by radioactive decay. The production mechanism of annihilation radiation introduces Doppler broadening. The annihilation peak produced in a photon spectrum by annihilation radiation therefore has a higher full width at half maximum (FWHM) than that of decay-generated gamma rays in the spectrum. The difference is more apparent with high-resolution detectors, such as germanium detectors, than with low-resolution detectors such as sodium iodide detectors.
Because of their well-defined energy (511 keV) and characteristic, Doppler-broadened shape, annihilation radiation can often be useful in defining the energy calibration of a gamma ray spectrum.
References
Antimatter
Gamma rays | Annihilation radiation | [
"Physics"
] | 344 | [
"Antimatter",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Gamma rays",
"Matter"
] |
9,917,583 | https://en.wikipedia.org/wiki/X-ray%20diffraction | X-ray diffraction is a generic term for phenomena associated with changes in the direction of X-ray beams due to interactions with the electrons around atoms. It occurs due to elastic scattering, when there is no change in the energy of the waves. The resulting map of the directions of the X-rays far from the sample is called a diffraction pattern. It is different from X-ray crystallography which exploits X-ray diffraction to determine the arrangement of atoms in materials, and also has other components such as ways to map from experimental diffraction measurements to the positions of atoms.
This article provides an overview of X-ray diffraction, starting with the early history of X-rays and the discovery that they have the right spacings to be diffracted by crystals. In many cases these diffraction patterns can be interpreted using a single-scattering (kinematical) theory with conservation of energy (wave vector). Many different types of X-ray sources exist, ranging from ones used in laboratories to higher-brightness synchrotron light sources. Similar diffraction patterns can be produced by related scattering techniques such as electron diffraction or neutron diffraction. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS).
History
When Wilhelm Röntgen discovered X-rays in 1895, physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the X-ray notation for sharp spectral lines, noting in 1909 two separate energies which he at first named "A" and "B"; supposing that there might be lines prior to "A", he later began an alphabetical numbering from "K". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays are not only waves but also have particle properties, and Sommerfeld coined the name Bremsstrahlung for the continuous spectra formed when electrons bombard a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction pattern on a photographic plate. After being developed, the plate showed rings of fuzzy spots of roughly elliptical shape. Despite the crude and unclear image, the image confirmed the diffraction concept. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays).
After seeing the initial results, Laue was walking home and suddenly conceived of the physical laws describing the effect. Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.
After Von Laue's pioneering research the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms; see X-ray crystallography for more details.
Introduction to x-ray diffraction theory
Basics
Crystals are regular arrays of atoms, and X-rays are electromagnetic waves. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions.
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and their spacing by d. William Lawrence Bragg proposed a model where the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ.
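As a rough numerical illustration of the Bragg condition just described (path-length difference nλ = 2d sin θ), the Python sketch below solves for the first few diffraction orders; the copper Kα wavelength and the d-spacing are example values chosen for illustration, not figures taken from the text.

```python
import math

# Example inputs (assumptions): a typical laboratory Cu K-alpha wavelength and
# an arbitrary interplanar spacing, both in angstroms.
wavelength = 1.5406
d = 3.135

for n in (1, 2, 3):
    s = n * wavelength / (2.0 * d)       # sin(theta) from n*lambda = 2*d*sin(theta)
    if s <= 1.0:                         # otherwise no diffraction for this order
        theta = math.degrees(math.asin(s))
        print(f"order n={n}: theta = {theta:.2f} deg, scattering angle 2theta = {2*theta:.2f} deg")
```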
A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group.
Ewald's sphere
Each X-ray diffraction pattern represents a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction. For a given incident wavevector k0, the only wavevectors with the same energy lie on the surface of a sphere. In the diagram, the wavevector k1 lies on the Ewald sphere and ends at a reciprocal lattice vector g1, so it satisfies Bragg's law. In contrast, the wavevector k2 differs from the reciprocal lattice vector g2 by the vector s, which is called the excitation error. For the large single crystals primarily used in crystallography, only the Bragg's law case matters; for electron diffraction and some other types of X-ray diffraction, non-zero values of the excitation error also matter.
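A minimal numerical reading of this construction, under the crystallographic convention |k| = 1/λ, is sketched below; the test of whether a reciprocal-lattice vector g satisfies Bragg's law is |k0 + g| = |k0|, and the signed difference of the two lengths is used here as a simple stand-in for the excitation error. The vectors are invented example values.

```python
import numpy as np

# Incident wavevector with |k0| = 1/lambda (lambda ~ 1.54 angstrom) and a
# candidate reciprocal-lattice vector, both in reciprocal angstroms (assumed values).
k0 = np.array([0.0, 0.0, 1.0 / 1.54])
g = np.array([0.10, 0.00, -0.012])

# Bragg's law holds when k0 + g lies on the Ewald sphere of radius |k0|;
# the signed radial distance from the sphere is used here as the excitation error.
s = np.linalg.norm(k0) - np.linalg.norm(k0 + g)
print(f"excitation error s ~ {s:.4e}  (s = 0 means Bragg's law is satisfied exactly)")
```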
Scattering amplitudes
X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the elastic interaction of an electromagnetic ray with a charged particle.
The intensity of Thomson scattering for one particle with mass m and elementary charge q is:
Hence the atomic nuclei, which are much heavier than an electron, contribute negligibly to the scattered X-rays. Consequently, the coherent scattering detected from an atom can be accurately approximated by analyzing the collective scattering from the electrons in the system.
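A quick check of this statement follows from the (q/m)² scaling of the Thomson intensity: with the same magnitude of charge, the electron-to-proton intensity ratio is set by the squared mass ratio. The sketch below uses standard constants, not values from the text.

```python
# Thomson scattering intensity scales as (q/m)^2, so for equal charge magnitude
# the mass ratio alone sets how much more strongly an electron scatters than a proton.
m_e = 9.109e-31   # electron mass, kg
m_p = 1.673e-27   # proton mass, kg

ratio = (m_p / m_e) ** 2
print(f"electron/proton Thomson intensity ratio ~ {ratio:.2e}")   # roughly 3.4e6
```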
The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, it will be represented here as a scalar wave. We will ignore the time dependence of the wave and just concentrate on the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the incoming wave at time t = 0 is given by
At a position r within the sample, consider a density of scatterers f(r); these scatterers produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV about r
where S is the proportionality constant.
Consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike a screen (detector) at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin| = |kout|. From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase
The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal
which may be written as a Fourier transform
where g = kout – kin is a reciprocal lattice vector that satisfies Bragg's law and the Ewald construction mentioned above. The measured intensity of the reflection will be the square of this amplitude
The above assumes that the crystalline regions are somewhat large, for instance microns across, but also not so large that the X-rays are scattered more than once. If either of these is not the case then the diffracted intensities will be more complicated.
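As a hedged sketch of the sum-over-scatterers (discrete Fourier transform) form of the amplitude described above, the snippet below evaluates F(g) = Σ f_j exp(i g·r_j) for two point-like scatterers and squares it to get an intensity; the positions, scattering strengths and g vector are illustrative assumptions only.

```python
import numpy as np

# Two point-like scatterers with made-up positions (angstrom-like units) and strengths.
r = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.5, 0.5]])
f = np.array([1.0, 0.8])
g = 2 * np.pi * np.array([1.0, 1.0, 0.0])   # example reciprocal-lattice vector

# Scattered amplitude as a discrete sum, and the measured intensity as its squared modulus.
F = complex(np.sum(f * np.exp(1j * (r @ g))))
print(f"amplitude F(g) = {F:.3f}, intensity |F|^2 = {abs(F)**2:.3f}")
```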
X-ray sources
Rotating anode
Small scale diffraction experiments can be done with a local X-ray tube source, typically coupled with an image plate detector. These have the advantage of being relatively inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the X-rays produced is limited by the availability of different anode materials. Furthermore, the intensity is limited by the power applied and cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 μm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and runs with ~2 kW of electron beam power. The more expensive variety has a rotating-anode type source that runs with ~14 kW of e-beam power.
X-rays are generally filtered (by use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with an arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å).
Microfocus tube
A more recent development is the microfocus tube, which can deliver at least as high a beam flux (after collimation) as rotating-anode sources but requires a beam power of only a few tens or hundreds of watts rather than several kilowatts.
Synchrotron radiation
Synchrotron radiation sources are some of the brightest light sources on earth and are some of the most powerful tools available for X-ray diffraction and crystallography. X-ray beams are generated in synchrotrons which accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields.
Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons' path is bent, they emit bursts of energy in the form of X-rays.
The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryocrystallography can protect the sample from radiation damage by freezing the crystal at liquid nitrogen temperatures (~100 K). Cryocrystallography methods are applied to home rotating-anode sources as well. However, synchrotron radiation frequently has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments which maximize the anomalous signal. This is critical in experiments such as single-wavelength anomalous dispersion (SAD) and multi-wavelength anomalous dispersion (MAD).
Free-electron laser
Free-electron lasers have been developed for use in X-ray diffraction and crystallography. These are the brightest X-ray sources currently available, with the X-rays coming in femtosecond bursts. The intensity of the source is such that atomic-resolution diffraction patterns can be resolved for crystals otherwise too small for data collection. However, the intense light source also destroys the sample, requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected in order to get a complete data set. This method, serial femtosecond crystallography, has been used in solving the structure of a number of protein crystal structures, sometimes noting differences with equivalent structures collected from synchrotron sources.
Related scattering techniques
Other X-ray techniques
Other forms of elastic X-ray scattering besides single-crystal diffraction include powder diffraction, small-angle X-ray scattering (SAXS) and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available.
These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (time resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements.
The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph.
Electron diffraction
Because they interact via the Coulomb force, the scattering of electrons by matter is 1000 or more times stronger than that of X-rays. Hence electron beams produce strong multiple or dynamical scattering even for relatively thin crystals (>10 nm). While there are similarities between the diffraction of X-rays and electrons, as can be found in the book by John M. Cowley, the approach is different as it is based upon the original approach of Hans Bethe and solving the Schrödinger equation for relativistic electrons, rather than a kinematical or Bragg's law approach. Information about very small regions, down to single atoms, is possible. The range of applications for electron diffraction, transmission electron microscopy and transmission electron crystallography with high energy electrons is extensive; see the relevant links for more information and citations. In addition to transmission methods, low-energy electron diffraction is a technique where electrons are back-scattered off surfaces and has been extensively used to determine surface structures at the atomic scale, and reflection high-energy electron diffraction is another which is extensively used to monitor thin film growth.
Neutron diffraction
Neutron diffraction is used for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although sources producing neutrons by spallation are becoming increasingly available. Being uncharged, neutrons scatter from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in X-ray diffraction. Neutron scattering also has the property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O.
References
Laboratory techniques in condensed matter physics
Crystallography
Diffraction
Materials science
Synchrotron-related techniques
Crystallography | X-ray diffraction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,760 | [
"Applied and interdisciplinary physics",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Laboratory techniques in condensed matter physics",
"Materials science",
"Crystallography",
"Diffraction",
"Condensed matter physics",
"nan",
"X-ray crystallography",
"Spectroscopy"
... |
16,750,945 | https://en.wikipedia.org/wiki/Multiple%20exciton%20generation | In solar cell research, carrier multiplication is the phenomenon wherein the absorption of a single photon leads to the excitation of multiple electrons from the valence band to conduction band. In the theory of a conventional solar cell, each photon is only able to excite one electron across the band gap of the semiconductor, and any excess energy in that photon is dissipated as heat. In a material with carrier multiplication, high-energy photons excite on average more than one electron across the band gap, and so in principle the solar cell can produce more useful work.
In quantum dot solar cells, the excited electron in the conduction band interacts with the hole it leaves behind in the valence band, and this composite uncharged object is known as an exciton. The carrier multiplication effect in a dot can be understood as creating multiple excitons, and is called multiple exciton generation (MEG). MEG may considerably increase the energy conversion efficiency of nanocrystal based solar cells, though extracting the energy may be difficult because of the short lifetimes of the multiexcitons.
The quantum mechanical origin of MEG is still under debate and several possibilities have been suggested:
1) Impact ionization: light excites a high-energy exciton (X) which decays irreversibly into a quasi-continuum of multiexciton (multi-X) states available at this energy. The model requires only the density of states of multiexcitons being very high, while the Coulomb coupling between X and multi-X can be quite small.
2) Coherent superposition of single and multiexciton states: the first model suggested, but an oversimplified one (the high density of multi-X states is not taken into account). Light excites an X (which is not a true eigenstate of the system) which can then coherently convert to multi-X and back to X many times (quantum beats). This process requires the Coulomb coupling between them to be much stronger than the decay rate via phonons (which is usually not the case). The excitation will finally decay via phonons to a lower energy X or multi-X, depending on which of the decays is faster.
3) Multiexciton formation through a virtual exciton state. Light directly excites the eigenstate of the system (in this case, a coherent mixture of X and multi-X). The term "virtual" relates here to a pure X, because it is not a true eigenstate of the system (same for model 2).
All of the above models can be described by the same mathematical model (density matrix) which can behave differently depending on the set of initial parameters (coupling strength between the X and multi-X, density of states, decay rates).
MEG was first observed in 2004 using colloidal PbSe quantum dots and later was found in quantum dots of other compositions including PbS, PbTe, CdS, CdSe, InAs, Si, and InP. However, many early studies in colloidal quantum dots significantly overestimated the MEG effect due to undetected photocharging, an issue later identified and resolved by vigorously stirring colloidal samples. Multiple exciton generation was first demonstrated in a functioning solar cell in 2011, also using colloidal PbSe quantum dots. Multiple exciton generation was also detected in semiconducting single-walled carbon nanotubes (SWNTs) upon absorption of single photons. For (6,5) SWNTs, absorption of single photons with energies corresponding to three times the SWNT energy gap results in an exciton generation efficiency of 130% per photon. The multiple exciton generation threshold in SWNTs can be close to the limit defined by energy conservation.
Graphene, which is closely related to nanotubes, is another material in which multiple exciton generation has been observed.
Double-exciton generation has additionally been observed in organic pentacene derivatives through singlet exciton fission with extremely high quantum efficiency.
References
Quantum electronics | Multiple exciton generation | [
"Physics",
"Materials_science"
] | 850 | [
"Condensed matter physics",
"Nanotechnology",
"Quantum mechanics",
"Quantum electronics"
] |
16,759,410 | https://en.wikipedia.org/wiki/Hartree%20equation | In 1927, a year after the publication of the Schrödinger equation, Hartree formulated what are now known as the Hartree equations for atoms, using the concept of self-consistency that Lindsay had introduced in his study of many electron systems in the context of Bohr theory. Hartree assumed that the nucleus together with the electrons formed a spherically symmetric field. The charge distribution of each electron was the solution of the Schrödinger equation for an electron in a potential , derived from the field. Self-consistency required that the final field, computed from the solutions, was self-consistent with the initial field, and he thus called his method the self-consistent field method.
History
In order to solve the equation of an electron in a spherical potential, Hartree first introduced atomic units to eliminate physical constants. Then he converted the Laplacian from Cartesian to spherical coordinates to show that the solution was a product of a radial function and a spherical harmonic with an angular quantum number , namely . The equation for the radial function was
Hartree equation in mathematics
In mathematics, the Hartree equation, named after Douglas Hartree, is
in
where
and
The non-linear Schrödinger equation is in some sense a limiting case.
Hartree product
The wavefunction which describes all of the electrons, , is almost always too complex to calculate directly. Hartree's original method was to first calculate the solutions to Schrödinger's equation for individual electrons 1, 2, 3, ..., p, in the states , which yields individual solutions: . Since each is a solution to the Schrödinger equation by itself, their product should at least approximate a solution. This simple method of combining the wavefunctions of the individual electrons is known as the Hartree product:
This Hartree product gives us the wavefunction of a system (many-particle) as a combination of wavefunctions of the individual particles. It is inherently mean-field (assumes the particles are independent) and is the unsymmetrized version of the Slater determinant ansatz in the Hartree–Fock method. Although it has the advantage of simplicity, the Hartree product is not satisfactory for fermions, such as electrons, because the resulting wave function is not antisymmetric. An antisymmetric wave function can be mathematically described using the Slater determinant.
Derivation
Let's start from a Hamiltonian of one atom with Z electrons. The same method with some modifications can be expanded to a monoatomic crystal using the Born–von Karman boundary condition and to a crystal with a basis.
The expectation value is given by
Where the are the spins of the different particles.
In general we approximate this potential with a mean field which is also unknown and needs to be found together with the eigenfunctions of the problem. We will also neglect all relativistic effects like spin-orbit and spin-spin interactions.
Hartree derivation
At the time of Hartree, the full Pauli exclusion principle had not yet been formulated; the exclusion principle was understood only in terms of quantum numbers, and it was not yet clear that the wave function of electrons must be antisymmetric.
If we start from the assumption that the wave functions of each electron are independent
we can assume that the total wave function is the product of the single wave functions and that the total charge density at position due to all electrons except i is
where spin has been neglected here for simplicity.
This charge density creates an extra mean potential:
The solution can be written as the Coulomb integral
If we now consider the electron i this will also satisfy the time independent Schrödinger equation
This is interesting on its own because it can be compared with a single particle problem in a continuous medium where the dielectric constant is given by:
Where and
Finally, we have the system of Hartree equations
This is a non-linear system of integro-differential equations, but it is interesting in a computational setting because it can be solved iteratively.
Namely, we start from a set of known eigenfunctions (which in this simplified mono-atomic example can be the ones of the hydrogen atom) and starting initially from the potential
computing at each iteration a new version of the potential from the charge density above and then a new version of the eigenfunctions; ideally these iterations converge.
From the convergence of the potential we can say that we have a "self-consistent" mean field, i.e. a continuous variation from a known potential with known solutions to an averaged mean-field potential. In that sense the potential is consistent and not so different from the one originally used as an ansatz.
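A minimal one-dimensional toy version of this iteration is sketched below, assuming a soft-Coulomb "nucleus" and two electrons in the lowest orbital on a finite-difference grid; for brevity the mean field is built from the full density rather than excluding electron i's own contribution, so it illustrates only the self-consistency loop, not the exact Hartree equations.

```python
import numpy as np

# Toy 1-D self-consistent field loop (all numerical choices are illustrative assumptions).
n, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy operator -1/2 d^2/dx^2 by second-order finite differences (atomic units).
T = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

v_ext = -2.0 / np.sqrt(x**2 + 1.0)                              # soft-Coulomb nucleus, Z = 2
kernel = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)      # soft electron-electron interaction

v_hartree = np.zeros(n)                                         # initial guess for the mean field
for it in range(100):
    H = T + np.diag(v_ext + v_hartree)                          # one-particle Hamiltonian
    eps, phi = np.linalg.eigh(H)
    orb = phi[:, 0] / np.sqrt(dx)                               # lowest normalized orbital
    rho = 2.0 * orb**2                                          # two electrons in that orbital
    v_new = kernel @ rho * dx                                   # Coulomb integral -> new mean field
    if np.max(np.abs(v_new - v_hartree)) < 1e-8:
        break                                                   # the mean field is self-consistent
    v_hartree = 0.5 * v_hartree + 0.5 * v_new                   # damped update helps convergence

print(f"converged after {it} iterations, lowest orbital energy {eps[0]:.4f} Ha")
```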
Slater–Gaunt derivation
In 1928 J. C. Slater and J. A. Gaunt independently showed that given the Hartree product approximation:
They started from the following variational condition
where the are the Lagrange multipliers needed in order to minimize the functional of the mean energy . The orthogonality conditions act as constraints enforced through the Lagrange multipliers. From here they managed to derive the Hartree equations.
Fock and Slater determinant approach
In 1930 Fock and Slater independently then used the Slater determinant instead of the Hartree product for the wave function
This determinant guarantees the exchange symmetry (i.e. if two columns are swapped, the determinant changes sign) and the Pauli principle: if two electronic states are identical, there are two identical rows and therefore the determinant is zero.
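These two properties are easy to check numerically with a random matrix standing in for the orbital values; the sketch below is only a toy demonstration of the determinant algebra, not of an actual electronic-structure calculation.

```python
import numpy as np

# M[i, j] stands in for orbital j evaluated at electron i (random example values).
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))

# Exchanging two columns (two states) flips the sign of the determinant.
swapped = M[:, [1, 0, 2]]
print(np.isclose(np.linalg.det(swapped), -np.linalg.det(M)))   # True

# Two identical rows (two identical electronic states) make the determinant vanish.
M_pauli = M.copy()
M_pauli[1, :] = M_pauli[0, :]
print(np.isclose(np.linalg.det(M_pauli), 0.0))                 # True
```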
They then applied the same variational condition as above
Where now the are a generic orthogonal set of eigenfunctions from which the wave function is built. The orthogonality conditions act as constraints enforced through the Lagrange multipliers. From this they derived the Hartree–Fock method.
References
Partial differential equations
Electronic structure methods
Quantum chemistry
Theoretical chemistry
Computational chemistry | Hartree equation | [
"Physics",
"Chemistry"
] | 1,196 | [
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Theoretical chemistry",
"Electronic structure methods",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
15,031,832 | https://en.wikipedia.org/wiki/CLDN17 | Claudin-17 is a protein that in humans is encoded by the CLDN17 gene. It belongs to the group of claudins; claudins are cell-cell junction proteins that keep that maintains cell- and tissue-barrier function. It forms anion-selective paracellular channels and is localized mainly in kidney proximal tubules.
References
External links
Further reading
Proteins
Genes | CLDN17 | [
"Chemistry"
] | 82 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
15,031,853 | https://en.wikipedia.org/wiki/EHF%20%28gene%29 | ETS homologous factor is a protein that in humans is encoded by the EHF gene.
This gene encodes a protein that belongs to an ETS transcription factor subfamily characterized by epithelial-specific expression (ESEs). The encoded protein acts as a transcriptional repressor and may be associated with asthma susceptibility. This protein may be involved in epithelial differentiation and carcinogenesis.
Further reading
Cangemi, R., Mensah, A., Albertini, V., Jain, A., Mello-Grand, M., Chiorino, G., Catapano, C.V. & Carbone, G.M. Reduced expression and tumor suppressor function of the ETS transcription factor ESE-3 in prostate cancer. Oncogene 27, 2877-2885 (2008).
Albino D, Longoni N, Curti L, Mello-Grand M, Pinton S, Civenni G, Thalmann G, D'Ambrosio G, Sarti M, Sessa F, Chiorino G, Catapano CV, Carbone GM. ESE3/EHF controls epithelial cell differentiation and its loss leads to prostate tumors with mesenchymal and stem-like features. Cancer Res. 2012 Jun 1;72(11):2889-900.
Kunderfranco, P., Mello-Grand, M., Cangemi, R., Pellini, S., Mensah, A., Albertini, V., Malek, A., Chiorino, G., Catapano, C.V. & Carbone, G.M. ETS transcription factors control transcription of EZH2 and epigenetic silencing of the tumor suppressor gene Nkx3.1 in prostate cancer. PLoS One 5, e10547 (2010).
References
External links
Transcription factors | EHF (gene) | [
"Chemistry",
"Biology"
] | 411 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,034,041 | https://en.wikipedia.org/wiki/Landing%20footprint | A landing footprint, also called a landing ellipse, is the area of uncertainty of a spacecraft's landing zone on an astronomical body. After atmospheric entry, the landing point of a spacecraft will depend upon the degree of control (if any), entry angle, entry mass, atmospheric conditions, and drag. (Note that the Moon and the asteroids have no aerial factors.) By aggregating such numerous variables it is possible to model a spacecraft's landing zone to a certain degree of precision. By simulating entry under varying conditions an probable ellipse can be calculated; the size of the ellipse represents the degree of uncertainty for a given confidence interval.
Mathematical explanation
To create a landing footprint for a spacecraft, the standard approach is to use the Monte Carlo method to generate distributions of initial entry conditions and atmospheric parameters, solve the reentry equations of motion, and catalog the final longitude/latitude pair at touchdown. It is commonly assumed that the resulting distribution of landing sites follows a bivariate Gaussian distribution:
where:
is the vector containing the longitude/latitude pair
is the expected value vector
is the covariance matrix
denotes the determinant of the covariance matrix
Once the parameters are estimated from the numerical simulations, an ellipse can be calculated for a percentile . It is known that for a real-valued vector with a multivariate Gaussian joint distribution, the square of the Mahalanobis distance has a chi-squared distribution with degrees of freedom:
This can be seen by defining the vector , which leads to and is the definition of the chi-squared statistic used to construct the resulting distribution. So for the bivariate Gaussian distribution, the boundary of the ellipse at a given percentile is . This is the equation of a circle centered at the origin with radius , leading to the equations:
where is the angle. The matrix square root can be found from the eigenvalue decomposition of the covariance matrix, from which can be written as:
where the eigenvalues lie on the diagonal of . The values of then define the landing footprint for a given level of confidence, which is expressed through the choice of percentile.
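A minimal numerical sketch of this recipe is shown below, assuming the Monte Carlo touchdown points are simply drawn from a known bivariate Gaussian rather than produced by a real entry, descent and landing simulation; for two degrees of freedom the chi-squared quantile has the closed form -2 ln(1 - p).

```python
import numpy as np

# Stand-in Monte Carlo landing points (longitude, latitude in degrees); the mean
# and covariance used to generate them are invented example values.
rng = np.random.default_rng(1)
landings = rng.multivariate_normal(mean=[137.4, -4.6],
                                   cov=[[0.10, 0.03], [0.03, 0.02]],
                                   size=5000)

mu = landings.mean(axis=0)                    # estimated expected value vector
Sigma = np.cov(landings, rowvar=False)        # estimated covariance matrix

p = 0.99                                      # chosen confidence level of the footprint
r2 = -2.0 * np.log(1.0 - p)                   # chi-squared quantile for 2 degrees of freedom

# Matrix square root of Sigma from its eigenvalue decomposition.
eigval, eigvec = np.linalg.eigh(Sigma)
sqrt_Sigma = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T

# Map a circle of radius sqrt(r2) through sqrt(Sigma) and shift by the mean
# to obtain the boundary of the landing footprint.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.sqrt(r2) * np.vstack([np.cos(theta), np.sin(theta)])
ellipse = (sqrt_Sigma @ circle).T + mu

print("footprint semi-axes (deg):", np.sqrt(r2 * eigval))
```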
See also
List of landing ellipses on extraterrestrial bodies
Mars landing
Moon landing
References
Spaceflight concepts
Atmospheric entry
Statistical intervals | Landing footprint | [
"Engineering"
] | 476 | [
"Atmospheric entry",
"Aerospace engineering"
] |
15,034,662 | https://en.wikipedia.org/wiki/Mars%20MetNet | Mars MetNet was a planned atmospheric science mission to Mars, initiated by the Finnish Meteorological Institute (FMI) together with Russia and Spain. By September 2013, two flight-capable entry, descent and landing systems (EDLS) have been manufactured and tested. As of 2015 baseline funding exists until 2020. As of 2016, neither the launch vehicle nor precursory launch date have been set.
The objective is to establish a widespread surface observation network on Mars to investigate the planet's atmospheric structure, physics and meteorology. The bulk of the mission consists of at least 16 MetNet impact landers deployed over the Martian surface.
History
The basic concepts of Mars MetNet were initiated by the Finnish Meteorological Institute (FMI) team in the late 1980s. The concept was matured over a decade, and the development work eventually started in the year 2000. MetNet can be considered a successor of the NetLander, the Russian Mars 96, and the earlier ESA Marsnet and InterMarsnet mission concepts. Of these, Mars 96 went all the way to launch, but a failure of the rocket's fourth stage during trans-Mars injection caused the spacecraft to re-enter Earth's atmosphere and break up. This multi-part mission carried two penetrators quite like MetNet, the main difference being that on impact the front part would separate from the back and delve some meters deeper into the ground.
MetNet was among the missions proposed at the European Geosciences Union General Assembly in April 2016.
Status
The scope of the Mars MetNet mission is eventually to deploy several tens of impact landers on the Martian surface. Mars MetNet is being developed by a consortium consisting of the Finnish Meteorological Institute (Mission Lead), the Russian Space Research Institute (IKI) (in cooperation with Lavochkin Association), and Instituto Nacional de Técnica Aeroespacial (INTA) from Spain.
The baseline program development funding exists until 2020. Definition of the precursory mission and discussions on launch opportunities are currently under way. The precursory mission would consist of one lander and is intended as a technology and science demonstration mission. If successful and if funded, more landers are proposed to be deployed in the following launch windows.
By 2013, all qualification activities had been completed and the payload and flight model components were being manufactured. By September 2013, two flight-capable entry, descent and landing systems (EDLS) had been manufactured and tested with acceptance levels. One of those two probes is being used for further environment tests, while a second is currently considered flight-worthy. The tests covered resistance to vibration, heat, and mechanical impact shock, and are ongoing as of April 2015. The test EDLS unit may later be refurbished for flight.
Scientific objectives
Detailed characterization of the Martian circulation patterns, boundary layer phenomena, and climatological cycles requires simultaneous in situ meteorological measurements from networks of stations on the Martian surface. The fact that both meteorology in particular and climatology in general vary both temporally and spatially means that the most effective means of monitoring these is to make simultaneous measurements at multiple locations and over a sufficiently long period of time. Mars MetNet includes both a global-scale, multi-point network of surface probes supplemented by a supporting satellite in orbit, for a projected duration of two Martian years. Somewhere in the range of ten to twenty observation points is seen as a minimum to get a good picture of atmospheric phenomena on a planet-wide scale.
Scientific objectives of the lander are to study:
Atmospheric dynamics and circulation
Surface to atmosphere interactions and planetary boundary layer
Dust raising mechanisms
Cycles of CO2, H2O and dust
Evolution of the Martian climate
The purpose of the Mars MetNet Precursor Mission is to confirm the concept of deployment for the mini-meteorological stations onto the Martian surface, to obtain atmospheric data during the descent phase, and to obtain information about the meteorology and surface structure at the landing site during one Martian year or longer.
Lander concept
Each MetNet lander, or impactor probe, will use an inflatable entry and descent system instead of rigid heat shields and parachutes as earlier semi-hard landing devices have used. This way the ratio of the payload mass to the overall mass is optimized, and more mass and volume resources are spared for the science payload. The MetNet lander's atmospheric descent process can be partitioned into two phases: the primary aerodynamic or the 'Inflatable Braking Unit' deceleration phase, and the secondary aerodynamic or the 'Additional Inflatable Braking Unit' deceleration phase. The probes will have a final landing speed of 44.6 to 57.6 m/s. The operational lifetime of a lander on the Martian surface will be seven years.
Deployment
As secondary payload
As the requirements for a transfer vehicle are not very extensive, the Mars MetNet impact landers could be launched with any mission going to Mars. The landers could piggyback on a Martian orbiter from ESA, NASA, Russia or China or an add-on to larger Martian landers like ExoMars.
Dedicated launch
Also a dedicated launch with several units from low Earth orbit is under study. Most of the Mars MetNet landers would be deployed to Mars separately a few weeks prior to the arrival at Mars to decrease the amount of fuel required for deceleration maneuvers. The satellite platform would then be inserted into an orbit around Mars, and the last few Mars MetNet impact landers would be deployed to the Martian surface from that orbit, so as to be able to land on any selected areas of the Martian surface in a latitude range of +/- 30 degrees for optimal solar panel efficiency. A sounder on board the orbiter would perform continuous atmospheric soundings, thus complementing the in situ observations. The orbiter will also serve as the primary data relay between the impact landers and the Earth.
Precursory mission
A technology demonstrator mission called 'Mars MetNet Precursory Mission' could be launched either piggy-backing with another Mars mission or with a dedicated launch using the Russian Volna — a converted submarine sea-launched ballistic missile.
The Finnish Meteorological Institute (FMI) originally planned to launch the demonstration lander on board the Phobos-Grunt mission in 2011. However, the Mars MetNet lander was dropped from the Phobos-Grunt mission due to weight constraints on the spacecraft. Phobos-Grunt later failed to depart Earth orbit and crashed into the Pacific Ocean on January 16, 2012. The precursory mission launch date is yet to be determined.
Payload
The notional payload of the Mars MetNet Precursor Mission may include the following instruments:
MetBaro: pressure sensor with a 1015 hPa limit (100 g)
MetHumi: humidity sensor (15 g)
MetTemp: temperature sensor with a range from -110 °C to +30 °C (2 g)
Panoramic camera with four lenses mounted at 90° intervals (100 g)
MetSIS: a solar radiance sensor with an optical wireless communications system for data transfer
Dust Sensor: an infrared dust and gas detector (42 g)
Power
The impact landers are equipped with flexible solar panels, located on the upper side of the inflatable braking unit, that will provide approximately 0.6 W during the day. As the provided power output is insufficient to operate all instruments simultaneously, they are activated sequentially according to the different environmental constraints.
See also
Schiaparelli EDM lander, the 2016 ExoMars lander
ExoMars 2020 surface platform
References
External links
Animation video (58 seconds) of the hard landing sequence:
MetNet Website (checked 2016)
Missions to Mars
Meteorological instrumentation and equipment
Proposed space probes
Impactor spacecraft
Instituto Nacional de Técnica Aeroespacial | Mars MetNet | [
"Technology",
"Engineering"
] | 1,573 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
15,037,251 | https://en.wikipedia.org/wiki/Nucleic%20acid%20test | A nucleic acid test (NAT) is a technique used to detect a particular nucleic acid sequence and thus usually to detect and identify a particular species or subspecies of organism, often a virus or bacterium that acts as a pathogen in blood, tissue, urine, etc. NATs differ from other tests in that they detect genetic materials (RNA or DNA) rather than antigens or antibodies. Detection of genetic materials allows an early diagnosis of a disease because the detection of antigens and/or antibodies requires time for them to start appearing in the bloodstream. Since the amount of a certain genetic material is usually very small, many NATs include a step that amplifies the genetic material—that is, makes many copies of it. Such NATs are called nucleic acid amplification tests (NAATs). There are several ways of amplification, including polymerase chain reaction (PCR), strand displacement assay (SDA), transcription mediated assay (TMA), and loop-mediated isothermal amplification (LAMP).
Virtually all nucleic acid amplification methods and detection technologies use the specificity of Watson-Crick base pairing; single-stranded probe or primer molecules capture DNA or RNA target molecules of complementary strands. Therefore, the design of probe strands is highly significant to raise the sensitivity and specificity of the detection. However, the mutants which form the genetic basis for a variety of human diseases are usually slightly different from the normal nucleic acids. Often, they are only different in a single base, e.g., insertions, deletions, and single-nucleotide polymorphisms (SNPs). In this case, imperfect probe-target binding can easily occur, resulting in false-positive outcomes such as mistaking a strain that is commensal for one that is pathogenic. Much research has been dedicated to achieving single-base specificity.
Advances
Nucleic acid (DNA and RNA) strands with corresponding sequences stick together in pairwise chains, zipping up like Velcro tumbled in a clothes dryer. But each node of the chain is not very sticky, so the double-stranded chain is continuously coming partway unzipped and re-zipping itself under the influence of ambient vibrations (referred to as thermal noise or Brownian motion). Longer pairings are more stable. Nucleic acid tests use a "probe" which is a long strand with a short strand stuck to it. The long primer strand has a corresponding (complementary) sequence to a "target" strand from the disease organism being detected. The disease strand sticks tightly to the exposed part of the long primer strand (called the "toehold"), and then little by little, displaces the short "protector" strand from the probe. In the end, the short protector strand is not bound to anything, and the unbound short primer is detectable. The rest of this section gives some history of the research needed to fine-tune this process into a useful test.
In 2012, Yin's research group published a paper about optimizing the specificity of nucleic acid hybridization. They introduced a 'toehold exchange probe (PC)', which consists of a pre-hybridized complement strand C and a protector strand P. The complement strand is longer than the protector strand, leaving an unbound tail at one end, the toehold. The complement strand is perfectly complementary to the target sequence. When the correct target (X) reacts with the toehold exchange probe (PC), P is released and the hybridized product XC is formed. The standard free energy (∆) of this reaction is close to zero. On the other hand, if the toehold exchange probe (PC) reacts with a spurious target (S), the reaction still proceeds forward, but the standard free energy becomes less thermodynamically favorable. The standard free energy difference (∆∆) is significant enough to give obvious discrimination in yield. The discrimination factor Q is calculated as the yield of correct target hybridization divided by the yield of spurious target hybridization. Through experiments on different toehold exchange probes with 5 correct targets and 55 spurious targets carrying energetically representative single-base changes (replacements, deletions, and insertions), Yin's group concluded that the discrimination factors of these probes were between 3 and 100+, with a median of 26. The probes function robustly from 10 °C to 37 °C, from 1 mM to 47 mM, and with nucleic acid concentrations from 1 nM to 5 M. They also found that toehold exchange probes work robustly even in RNA detection.
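A heavily simplified two-state equilibrium sketch of why a near-zero standard free energy makes the probe discriminating is given below; it assumes equal initial probe and target concentrations, ignores kinetics, and uses an invented 4 kcal/mol single-base penalty, so the resulting discrimination factor is only indicative.

```python
import math

# Two-state equilibrium for X + PC <-> XC + P with equal starting concentrations
# of probe and target; all thermodynamic values below are illustrative assumptions.
RT = 0.593  # kcal/mol at 25 C

def yield_fraction(dG_rxn):
    """Equilibrium hybridization yield for the given standard reaction free energy."""
    K = math.exp(-dG_rxn / RT)
    return math.sqrt(K) / (1.0 + math.sqrt(K))

correct = yield_fraction(0.0)    # probe designed so the correct-target reaction has dG ~ 0
spurious = yield_fraction(4.0)   # spurious target shifted by an assumed single-base penalty
print(f"discrimination factor Q ~ {correct / spurious:.1f}")
```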
Further researches have been studied thereafter. In 2013, Seelig's group published a paper about fluorescent molecular probes which also utilizes the toehold exchange reaction. This enabled the optical detection of correct target and SNP target. They also succeeded in the detection of SNPs in E. coli-derived samples.
In 2015, David's group achieved extremely high (1,000+) selectivity of single-nucleotide variants (SNVs) by introducing the system called ‘competitive compositions’. In this system, they constructed a kinetic reaction model of the underlying hybridization processes to predict the optimal parameter values, which vary based on the sequences of SNV and wildtype (WT), on the design architecture of the probe and sink, and on the reagent concentrations and assay conditions. Their model succeeded in a median 890-fold selectivity for 44 cancer-related DNA SNVs, with a minimum of 200, which represents at least a 30-fold improvement over previous hybridization-based assays. In addition, they applied this technology to assay low VAF sequences from human genomic DNA following PCR, as well as directly to synthetic RNA sequences.
Based on this expertise, they developed a new PCR method called Blocker Displacement Amplification (BDA). It is a temperature-robust PCR which selectively amplifies all sequence variants within a roughly 20 nt window by 1000-fold over wildtype sequences, allowing easy detection and quantitation of hundreds of potential variants originally at ≤ 0.1% allele frequency. BDA achieves similar enrichment performance across anneal temperatures ranging from 56 °C to 64 °C. This temperature robustness facilitates multiplexed enrichment of many different variants across the genome, and furthermore enables the use of inexpensive and portable thermocycling instruments for rare DNA variant detection. BDA has been validated even on sample types including clinical cell-free DNA samples collected from the blood plasma of lung cancer patients.
Applications
Diagnosis of gonococcal and other neisserian infections: amplification of specific N. gonorrhoeae DNA or RNA sequences for detection.
Diagnosis of urogenital C. trachomatis infections
Detection of Mycobacterium tuberculosis
Detection of HIV RNA or DNA
Detection of zoonotic coronaviruses
Diagnostic test for SARS-CoV-2
Detection of antibiotic resistant bacteria following antibiotic treatment
References
Genetics articles needing expert attention
Genetics techniques
Medical tests | Nucleic acid test | [
"Engineering",
"Biology"
] | 1,459 | [
"Genetics techniques",
"Genetic engineering"
] |
15,040,625 | https://en.wikipedia.org/wiki/Hydrogen%20valve | A hydrogen valve is a special type of valve that is used for hydrogen at very low temperatures or high pressures in hydrogen storage or for example hydrogen vehicles.
Types
High pressure ball valves up to 6000 psig (413 bar) at 250 degrees F (121 degrees C) and flow coefficients from 4.0 to 13.8.
Material
Valves used in industrial hydrogen and oxygen applications, such as petrochemical processes, are often made of inconel.
See also
Diaphragm valve
Gate valve
Hydrogen tank
References
External links
Hydrogen valve
Valves
Hydrogen technologies
Cryogenics | Hydrogen valve | [
"Physics",
"Chemistry"
] | 115 | [
"Applied and interdisciplinary physics",
"Cryogenics",
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
1,696,200 | https://en.wikipedia.org/wiki/Marching%20cubes | Marching cubes is a computer graphics algorithm, published in the 1987 SIGGRAPH proceedings by Lorensen and Cline, for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (the elements of which are sometimes called voxels). The applications of this algorithm are mainly concerned with medical visualizations such as CT and MRI scan data images, and special effects or 3-D modelling with what is usually called metaballs or other metasurfaces. The marching cubes algorithm is meant to be used for 3-D; the 2-D version of this algorithm is called the marching squares algorithm.
History
The algorithm was developed by William E. Lorensen (1946-2019) and Harvey E. Cline as a result of their research for General Electric. At General Electric they worked on a way to efficiently visualize data from CT and MRI devices.
The premise of the algorithm is to divide the input volume into a discrete set of cubes. By assuming linear reconstruction filtering, each cube, which contains a piece of a given isosurface, can easily be identified because the sample values at the cube vertices must span the target isosurface value. For each cube containing a section of the isosurface, a triangular mesh that approximates the behavior of the trilinear interpolant in the interior cube is generated.
The first published version of the algorithm exploited rotational and reflective symmetry and also sign changes to build the table with 15 unique cases. However, due to the existence of ambiguities in the trilinear interpolant behavior in the cube faces and interior, the meshes extracted by the Marching Cubes presented discontinuities and topological issues. Given a cube of the grid, a face ambiguity occurs when its face vertices have alternating signs. That is, the vertices of one diagonal on this face are positive and the vertices on the other are negative. Observe that in this case, the signs of the face vertices are insufficient to determine the correct way to triangulate the isosurface. Similarly, an interior ambiguity occurs when the signs of the cube vertices are insufficient to determine the correct surface triangulation, i.e., when multiple triangulations are possible for the same cube configuration.
The popularity of the Marching Cubes and its widespread adoption resulted in several improvements in the algorithm to deal with the ambiguities and to correctly track the behavior of the interpolant. Durst in 1988 was the first to note that the triangulation table proposed by Lorensen and Cline was incomplete, and that certain Marching Cubes cases allow multiple triangulations. Durst's 'additional reference' was to an earlier, more efficient (see de Araujo) isosurface polygonization algorithm by Wyvill, Wyvill and McPheeters. Later, Nielson and Hamann in 1991 observed the existence of ambiguities in the interpolant behavior on the face of the cube. They proposed a test called Asymptotic Decider to correctly track the interpolant on the faces of the cube. In fact, as observed by Natarajan in 1994, this ambiguity problem also occurs inside the cube. In his work, the author proposed a disambiguation test based on the interpolant critical points, and added four new cases to the Marching Cubes triangulation table (subcases of the cases 3, 4, 6 and 7). At this point, even with all the improvements proposed to the algorithm and its triangulation table, the meshes generated by the Marching Cubes still had topological incoherencies.
The Marching Cubes 33, proposed by Chernyaev in 1995, is one of the first isosurface extraction algorithms intended to preserve the topology of the trilinear interpolant. In his work, Chernyaev extends to 33 the number of cases in the triangulation lookup table. He then proposes a different approach to solve the interior ambiguities, which is based on the Asymptotic Decider. Later, in 2003, Nielson proved that Chernyaev's lookup table is complete and can represent all the possible behaviors of the trilinear interpolant, and Lewiner et al. proposed an implementation to the algorithm. Also in 2003 Lopes and Brodlie extended the tests proposed by Natarajan. In 2013, Custodio et al. noted and corrected algorithmic inaccuracies that compromised the topological correctness of the mesh generated by the Marching Cubes 33 algorithm proposed by Chernyaev.
Algorithm
The algorithm proceeds through the scalar field, taking eight neighbor locations at a time (thus forming an imaginary cube), then determining the polygon(s) needed to represent the part of the isosurface that passes through this cube. The individual polygons are then fused into the desired surface.
This is done by creating an index to a precalculated array of 256 possible polygon configurations (2⁸ = 256) within the cube, by treating each of the 8 scalar values as a bit in an 8-bit integer. If the scalar's value is higher than the iso-value (i.e., it is inside the surface) then the appropriate bit is set to one, while if it is lower (outside), it is set to zero. The final value, after all eight scalars are checked, is the actual index to the polygon indices array.
Finally each vertex of the generated polygons is placed on the appropriate position along the cube's edge by linearly interpolating the two scalar values that are connected by that edge.
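A minimal sketch of these two steps (building the 8-bit cube index and interpolating a vertex along an edge) is shown below; the corner values, iso-value and corner numbering are illustrative assumptions, and a full implementation would go on to look up the triangle configuration for the computed index in the 256-entry table.

```python
# Example corner values of one cube (corners numbered 0..7) and an example iso-value.
corner_values = [0.2, 0.9, 1.3, 0.4, 0.1, 0.8, 1.1, 0.3]
iso = 1.0

# Step 1: build the 8-bit index; each corner above the iso-value sets one bit.
cube_index = 0
for bit, value in enumerate(corner_values):
    if value > iso:
        cube_index |= (1 << bit)
print(f"cube index = {cube_index} (0..255, selects a row of the triangle lookup table)")

# Step 2: place a vertex on a cube edge by linear interpolation of the two corner values.
def interpolate(p1, p2, v1, v2, iso):
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

# Edge between corners 1 and 2 of a unit cube (example corner coordinates).
print(interpolate((1, 0, 0), (1, 1, 0), corner_values[1], corner_values[2], iso))
```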
The gradient of the scalar field at each grid point is also the normal vector of a hypothetical isosurface passing from that point. Therefore, these normals may be interpolated along the edges of each cube to find the normals of the generated vertices which are essential for shading the resulting mesh with some illumination model.
Patent issues
An implementation of the marching cubes algorithm was patented as United States Patent 4,710,876. Another similar algorithm was developed, called marching tetrahedra, in order to circumvent the patent as well as solve a minor ambiguity problem of marching cubes with some cube configurations. The patent expired in 2005, and it is now legal for the graphics community to use it without royalties, since more than 20 years have passed since its issue date (December 1, 1987).
Sources
See also
Image-based meshing
Marching tetrahedra
External links
. Some of the early history of Marching Cubes.
Computer graphics algorithms
3D computer graphics
Mesh generation
Implicit surface modeling | Marching cubes | [
"Physics"
] | 1,369 | [
"Tessellation",
"Mesh generation",
"Symmetry"
] |
1,697,331 | https://en.wikipedia.org/wiki/Nyquist%20stability%20criterion | In control theory and stability theory, the Nyquist stability criterion or Strecker–Nyquist stability criterion, independently discovered by the German electrical engineer at Siemens in 1930 and the Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932, is a graphical technique for determining the stability of a dynamical system.
Because it only looks at the Nyquist plot of the open loop systems, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes.
The Nyquist stability criterion is widely used in electronics and control system engineering, as well as other fields, for designing and analyzing systems with feedback. While Nyquist is one of the most general stability tests, it is still restricted to linear time-invariant (LTI) systems. Nevertheless, there are generalizations of the Nyquist criterion (and plot) for non-linear systems, such as the circle criterion and the scaled relative graph of a nonlinear operator. Additionally, other stability criteria like Lyapunov methods can also be applied for non-linear systems.
Although Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool.
Nyquist plot
A Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing. The most common use of Nyquist plots is for assessing the stability of a system with feedback. In Cartesian coordinates, the real part of the transfer function is plotted on the X-axis while the imaginary part is plotted on the Y-axis. The frequency is swept as a parameter, resulting in one point per frequency. The same plot can be described using polar coordinates, where gain of the transfer function is the radial coordinate, and the phase of the transfer function is the corresponding angular coordinate. The Nyquist plot is named after Harry Nyquist, a former engineer at Bell Laboratories.
Assessment of the stability of a closed-loop negative feedback system is done by applying the Nyquist stability criterion to the Nyquist plot of the open-loop system (i.e. the same system without its feedback loop). This method is easily applicable even for systems with delays and other non-rational transfer functions, which may appear difficult to analyze with other methods. Stability is determined by looking at the number of encirclements of the point (−1, 0). The range of gains over which the system will be stable can be determined by looking at crossings of the real axis.
The Nyquist plot can provide some information about the shape of the transfer function. For instance, the plot provides information on the difference between the number of zeros and poles of the transfer function by the angle at which the curve approaches the origin.
When drawn by hand, a cartoon version of the Nyquist plot is sometimes used, which shows the linearity of the curve, but where coordinates are distorted to show more detail in regions of interest. When plotted computationally, one needs to be careful to cover all frequencies of interest. This typically means that the parameter is swept logarithmically, in order to cover a wide range of values.
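A minimal sketch of computing Nyquist-plot points for an example open-loop transfer function, with the frequency swept logarithmically as suggested above, is given below; the transfer function G(s) = 1/(s³ + 2s² + 2s + 1) is an invented example, not one from the text.

```python
import numpy as np

# Denominator coefficients of the example open-loop transfer function G(s).
den = np.array([1.0, 2.0, 2.0, 1.0])          # s^3 + 2 s^2 + 2 s + 1

# Logarithmic frequency sweep, as recommended for covering a wide range of values.
omega = np.logspace(-2, 2, 1000)               # rad/s
s = 1j * omega
G = 1.0 / np.polyval(den, s)

# Real part -> X axis, imaginary part -> Y axis of the Nyquist plot.
for w, g in zip(omega[::250], G[::250]):
    print(f"omega = {w:8.3f}  G(jw) = {g.real:+.4f} {g.imag:+.4f}j")
```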
Background
The mathematics uses the Laplace transform, which transforms integrals and derivatives in the time domain to simple multiplication and division in the s domain.
We consider a system whose transfer function is ; when placed in a closed loop with negative feedback , the closed loop transfer function (CLTF) then becomes:
Stability can be determined by examining the roots of the desensitivity factor polynomial , e.g. using the Routh array, but this method is somewhat tedious. Conclusions can also be reached by examining the open loop transfer function (OLTF) , using its Bode plots or, as here, its polar plot using the Nyquist criterion, as follows.
Any Laplace domain transfer function can be expressed as the ratio of two polynomials:
The roots of are called the zeros of , and the roots of are the poles of . The poles of are also said to be the roots of the characteristic equation .
The stability of is determined by the values of its poles: for stability, the real part of every pole must be negative. If is formed by closing a negative unity feedback loop around the open-loop transfer function,
then the roots of the characteristic equation are also the zeros of , or simply the roots of .
Cauchy's argument principle
From complex analysis, a contour drawn in the complex plane, encompassing but not passing through any number of zeros and poles of a function , can be mapped to another plane (named plane) by the function . Precisely, each complex point in the contour is mapped to the point in the new plane yielding a new contour.
The Nyquist plot of , which is the contour will encircle the point of the plane times, where by Cauchy's argument principle. Here and are, respectively, the number of zeros of and poles of inside the contour . Note that we count encirclements in the plane in the same sense as the contour and that encirclements in the opposite direction are negative encirclements. That is, we consider clockwise encirclements to be positive and counterclockwise encirclements to be negative.
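Counting signed encirclements can also be done numerically by accumulating the winding angle of the mapped contour about the point of interest; the sketch below is a generic illustration of that bookkeeping (with clockwise encirclements counted as positive, as in the text), not of any particular system.

```python
import numpy as np

def encirclements(curve, point=-1.0 + 0.0j):
    """Signed number of encirclements of `point` by a closed contour, clockwise positive."""
    rel = np.asarray(curve) - point
    dphi = np.diff(np.unwrap(np.angle(rel)))   # incremental winding angle along the curve
    winding = np.sum(dphi) / (2 * np.pi)       # counter-clockwise winding number
    return -winding                            # flip sign so clockwise counts as positive

# Example: a small closed curve that circles -1 once counter-clockwise.
t = np.linspace(0.0, 2.0 * np.pi, 400)
curve = -1 + 0.5 * np.exp(1j * t)
print(f"net encirclements of -1: {encirclements(curve):+.1f}")   # -1.0 (one counter-clockwise loop)
```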
Instead of Cauchy's argument principle, the original paper by Harry Nyquist in 1932 uses a less elegant approach. The approach explained here is similar to the approach used by Leroy MacColl (Fundamental theory of servomechanisms 1945) or by Hendrik Bode (Network analysis and feedback amplifier design 1945), both of whom also worked for Bell Laboratories. This approach appears in most modern textbooks on control theory.
Definition
We first construct the Nyquist contour, a contour that encompasses the right-half of the complex plane:
a path traveling up the jω axis, from 0 − j∞ to 0 + j∞.
a semicircular arc, with radius r → ∞, that starts at 0 + j∞ and travels clockwise to 0 − j∞.
The Nyquist contour mapped through the function 1 + G(s) yields a plot of 1 + G(s) in the complex plane. By the argument principle, the number of clockwise encirclements of the origin must be the number of zeros of 1 + G(s) in the right-half complex plane minus the number of poles of 1 + G(s) in the right-half complex plane. If instead, the contour is mapped through the open-loop transfer function G(s), the result is the Nyquist Plot of G(s). By counting the resulting contour's encirclements of −1, we find the difference between the number of zeros and poles of 1 + G(s) in the right-half complex plane. Recalling that the zeros of 1 + G(s) are the poles of the closed-loop system, and noting that the poles of 1 + G(s) are the same as the poles of G(s), we now state the Nyquist Criterion: Given a Nyquist contour Γ_s, let P be the number of poles of G(s) encircled by Γ_s, and Z be the number of zeros of 1 + G(s) encircled by Γ_s. Alternatively, and more importantly, if Z is the number of poles of the closed loop system in the right half plane, and P is the number of poles of the open-loop transfer function G(s) in the right half plane, the resultant contour in the G(s)-plane shall encircle (clockwise) the point −1 + j0 N times such that N = Z − P. If the system is originally open-loop unstable, feedback is necessary to stabilize the system. Right-half-plane (RHP) poles represent that instability. For closed-loop stability of a system, the number of closed-loop roots in the right half of the s-plane must be zero. Hence, the number of counter-clockwise encirclements about −1 + j0 must be equal to the number of open-loop poles in the RHP. Any clockwise encirclements of the critical point by the open-loop frequency response (when judged from low frequency to high frequency) would indicate that the feedback control system would be destabilizing if the loop were closed. (Using RHP zeros to "cancel out" RHP poles does not remove the instability, but rather ensures that the system will remain unstable even in the presence of feedback, since the closed-loop roots travel between open-loop poles and zeros in the presence of feedback. In fact, the RHP zero can make the unstable pole unobservable and therefore not stabilizable through feedback.)
The Nyquist criterion for systems with poles on the imaginary axis
The above consideration was conducted with an assumption that the open-loop transfer function G(s) does not have any pole on the imaginary axis (i.e. poles of the form 0 + jω). This results from the requirement of the argument principle that the contour cannot pass through any pole of the mapping function. The most common case is systems with integrators (poles at zero).
To be able to analyze systems with poles on the imaginary axis, the Nyquist Contour can be modified to avoid passing through the point 0 + jω. One way to do it is to construct a semicircular arc with radius ε → 0 around 0 + jω, that starts at 0 + j(ω − ε) and travels anticlockwise to 0 + j(ω + ε). Such a modification implies that the phasor G(s) travels along an arc of infinite radius by lπ clockwise, where l is the multiplicity of the pole on the imaginary axis.
Mathematical derivation
Our goal is to, through this process, check for the stability of the transfer function of our unity feedback system with gain k, which is given by
T(s) = kG(s) / (1 + kG(s)).
That is, we would like to check whether the characteristic equation of the above transfer function, given by
1 + kG(s) = 0,
has zeros outside the open left-half-plane (commonly abbreviated as OLHP).
We suppose that we have a clockwise (i.e. negatively oriented) contour Γ enclosing the right half plane, with indentations as needed to avoid passing through zeros or poles of the function 1 + kG(s). Cauchy's argument principle states that
−(1 / 2πi) ∮_Γ [ (1 + kG(s))′ / (1 + kG(s)) ] ds = Z − P,
where Z denotes the number of zeros of 1 + kG(s) enclosed by the contour and P denotes the number of poles of 1 + kG(s) enclosed by the same contour. Rearranging, we have
, which is to say
We then note that 1 + kG(s) has exactly the same poles as G(s). Thus, we may find P by counting the poles of G(s) that appear within the contour, that is, within the open right half plane (ORHP).
We will now rearrange the above integral via substitution. That is, setting , we have
We then make a further substitution, setting . This gives us
We now note that gives us the image of our contour under , which is to say our Nyquist plot. We may further reduce the integral
by applying Cauchy's integral formula. In fact, we find that the above integral corresponds precisely to the number of times the Nyquist plot encircles the point −1/k clockwise. Thus, we may finally state that
Z = P + (number of clockwise encirclements of −1/k by the Nyquist plot of G(s)).
We thus find that T(s) as defined above corresponds to a stable unity-feedback system when Z, as evaluated above, is equal to 0.
Importance
The Nyquist stability criterion is a graphical technique that determines the stability of a dynamical system, such as a feedback control system. It is based on the argument principle and the Nyquist plot of the open-loop transfer function of the system. It can be applied to systems that are not defined by rational functions, such as systems with delays. It can also handle transfer functions with singularities in the right half-plane, unlike Bode plots. The Nyquist stability criterion can also be used to find the phase and gain margins of a system, which are important for frequency domain controller design.
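As a rough sketch of how gain and phase margins can be read off a sampled open-loop frequency response (the same data behind a Nyquist or Bode plot), the snippet below brute-forces the two crossover points on a dense grid; the transfer function, frequency range, and grid density are made-up assumptions, and a real design tool would refine the crossovers analytically or by interpolation.

```python
# Estimate gain and phase margins from a sampled frequency response.
import numpy as np

num = [20.0]
den = [1.0, 6.0, 11.0, 6.0]          # (s + 1)(s + 2)(s + 3)

omega = np.logspace(-2, 3, 200_000)
L = np.polyval(num, 1j * omega) / np.polyval(den, 1j * omega)
mag = np.abs(L)
phase = np.unwrap(np.angle(L))

# Gain margin: 1/|L| at the phase-crossover frequency (phase = -180 deg).
i_pc = np.argmin(np.abs(phase + np.pi))
gain_margin_db = -20.0 * np.log10(mag[i_pc])

# Phase margin: 180 deg plus the phase at the gain crossover (|L| = 1).
i_gc = np.argmin(np.abs(mag - 1.0))
phase_margin_deg = 180.0 + np.degrees(phase[i_gc])

print(f"gain margin  ~ {gain_margin_db:.1f} dB at omega ~ {omega[i_pc]:.2f} rad/s")
print(f"phase margin ~ {phase_margin_deg:.1f} deg at omega ~ {omega[i_gc]:.2f} rad/s")
```

For this example the phase crossover sits near ω ≈ 3.3 rad/s, where the real-axis crossing of the Nyquist curve determines the range of stabilizing gains mentioned earlier.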
Summary
If the open-loop transfer function G(s) has a pole at zero of multiplicity l, then the Nyquist plot has a discontinuity at ω = 0. During further analysis it should be assumed that the phasor travels l times clockwise along a semicircle of infinite radius. After applying this rule, the poles at zero should be neglected, i.e. if there are no other unstable poles, then the open-loop transfer function G(s) should be considered stable.
If the open-loop transfer function G(s) is stable, then the closed-loop system is unstable if and only if the Nyquist plot encircles the point −1 at least once.
If the open-loop transfer function G(s) is unstable, then for the closed-loop system to be stable, there must be one counter-clockwise encirclement of −1 for each pole of G(s) in the right-half of the complex plane (see the worked example below).
The number of surplus encirclements (N + P greater than 0) is exactly the number of unstable poles of the closed-loop system.
However, if the graph happens to pass through the point −1 + j0, then deciding upon even the marginal stability of the system becomes difficult and the only conclusion that can be drawn from the graph is that there exist zeros on the jω axis.
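As a concrete illustration of these rules (a toy example chosen here, not one given in the text above): consider the open-loop-unstable plant G(s) = k/(s − 1) with k > 0, so P = 1. The Nyquist plot of G(jω) is a circle passing through the points −k and 0, traversed counterclockwise. For k > 1 it encircles −1 + j0 exactly once counterclockwise, so N = −1 and Z = N + P = 0: the closed loop is stable. For k < 1 there is no encirclement, so Z = 1 and the closed loop is unstable, which agrees with the closed-loop pole at s = 1 − k.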
See also
BIBO stability
Bode plot
Routh–Hurwitz stability criterion
Gain margin
Nichols plot
Hall circles
Phase margin
Barkhausen stability criterion
Circle criterion
Control engineering
Hankel singular value
References
Further reading
Faulkner, E. A. (1969): Introduction to the Theory of Linear Systems; Chapman & Hall;
Pippard, A. B. (1985): Response & Stability; Cambridge University Press;
Gessing, R. (2004): Control fundamentals; Silesian University of Technology;
Franklin, G. (2002): Feedback Control of Dynamic Systems; Prentice Hall,
External links
Applets with modifiable parameters
EIS Spectrum Analyser - a freeware program for analysis and simulation of impedance spectra
MATLAB function for creating a Nyquist plot of a frequency response of a dynamic system model.
PID Nyquist plot shaping - free interactive virtual tool, control loop simulator
Mathematica function for creating the Nyquist plot
The Nyquist Diagram for Electrical Circuits
Signal processing
Classical control theory
Stability theory | Nyquist stability criterion | [
"Mathematics",
"Technology",
"Engineering"
] | 2,884 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Stability theory",
"Dynamical systems"
] |
1,697,773 | https://en.wikipedia.org/wiki/Intercalation%20%28chemistry%29 | Intercalation is the reversible inclusion or insertion of a molecule (or ion) into materials with layered structures. Examples are found in graphite and transition metal dichalcogenides.
Examples
Graphite
One famous intercalation host is graphite, which intercalates potassium as a guest. Intercalation expands the van der Waals gap between sheets, which requires energy. Usually this energy is supplied by charge transfer between the guest and the host solid, i.e., redox. Two potassium graphite compounds are KC8 and KC24. Carbon fluorides (e.g., (CF)x and (C4F)) are prepared by reaction of fluorine with graphitic carbon. The color is greyish, white, or yellow. The bond between the carbon and fluorine atoms is covalent, thus fluorine is not intercalated. Such materials have been considered as a cathode in various lithium batteries.
Treating graphite with strong acids in the presence of oxidizing agents causes the graphite to oxidise. Graphite bisulfate, [C24]+[HSO4]−, is prepared by this approach using sulfuric acid and a little nitric acid or chromic acid. The analogous graphite perchlorate can be made similarly by reaction with perchloric acid.
Lithium-ion batteries
One of the largest and most diverse uses of the intercalation process by the early 2020s is in lithium-ion electrochemical energy storage, in the batteries used in many handheld electronic devices, mobility devices, electric vehicles, and utility-scale battery electric storage stations.
By 2023, all commercial Li-ion cells use intercalation compounds as active materials, and most use them in both the cathode and anode within the battery physical structure.
In 2012 three researchers, Goodenough, Yazami and Yoshino, received the 2012 IEEE Medal for Environmental and Safety Technologies for developing the intercalated lithium-ion battery and subsequently Goodenough, Whittingham, and Yoshino were awarded the 2019 Nobel Prize in Chemistry "for the development of lithium-ion batteries".
Exfoliation
An extreme case of intercalation is the complete separation of the layers of the material. This process is called exfoliation. Typically, harsh conditions are required, involving highly polar solvents and aggressive reagents.
Related materials
In biochemistry, intercalation is the insertion of molecules between the bases of DNA. This process is used as a method for analyzing DNA and it is also the basis of certain kinds of poisoning.
Clathrates are chemical substances consisting of a lattice that traps or contains molecules. Usually, clathrate compounds are polymeric and completely envelop the guest molecule. Inclusion compounds are often molecules, whereas clathrates are typically polymeric. Intercalation compounds are not 3-dimensional, unlike clathrate compounds. According to IUPAC, clathrates are "Inclusion compounds in which the guest molecule is in a cage formed by the host molecule or by a lattice of host molecules."
See also
Clathrate compound: where a molecule is included into a lattice
Graphite intercalation compound
Intercalation (biochemistry)
Stacking (chemistry)
Hydrogen embrittlement
Notes
Supramolecular chemistry | Intercalation (chemistry) | [
"Chemistry",
"Materials_science"
] | 679 | [
"Nanotechnology",
"nan",
"Supramolecular chemistry"
] |
1,697,968 | https://en.wikipedia.org/wiki/Racemic%20acid | Racemic acid is an old name for an optically inactive or racemic form of tartaric acid. It is an equal mixture of two mirror-image isomers (enantiomers), optically active in opposing directions. Racemic acid does not occur naturally in grape juice, although L-tartaric acid does.
Tartaric acid's sodium-ammonium salt is unusual among racemic mixtures in that during crystallization it can separate out into two kinds of crystals, each composed of one isomer, and whose macroscopic crystalline shapes are mirror images of each other. Thus, Louis Pasteur was able in 1848 to isolate each of the two enantiomers by laboriously separating the two kinds of crystals using delicate tweezers and a hand lens. Pasteur announced his intention to resolve racemic acid in:
Pasteur, Louis (1848) "Sur les relations qui peuvent exister entre la forme cristalline, la composition chimique et le sens de la polarisation rotatoire"
while he presented his resolution of racemic acid into separate optical isomers in:
Pasteur, Louis (1850) "Recherches sur les propriétés spécifiques des deux acides qui composent l'acide racémique"
In the latter paper, Pasteur sketched chiral polytopes drawn from concrete natural reality, quite possibly for the first time. The optical property of tartaric acid was first observed in 1832 by Jean Baptiste Biot, who observed its ability to rotate polarized light. It remains unknown whether Arthur Cayley or Ludwig Schläfli, or other contemporary mathematicians who studied polytopes, knew of the French work.
In two modern-day re-enactments performed in Japan of the Pasteur experiment, it was established that the preparation of crystals was not very reproducible. The crystals deformed, but they were large enough to inspect with the naked eye (microscope not required).
See also
Tartaric acid
Uvitic acid
Uvitonic acid
References
Alpha hydroxy acids
Chirality
Dicarboxylic acids
Food antioxidants
Optical materials
Racemic mixtures
Stereochemistry
Vicinal diols | Racemic acid | [
"Physics",
"Chemistry",
"Biology"
] | 451 | [
"Symmetry",
"Pharmacology",
"Racemic mixtures",
"Origin of life",
"Biochemistry",
"Stereochemistry",
"Chirality",
"Materials",
"Optical materials",
"Space",
"Chemical mixtures",
"nan",
"Asymmetry",
"Biological hypotheses",
"Spacetime",
"Matter"
] |
1,701,055 | https://en.wikipedia.org/wiki/Sol%E2%80%93gel%20process | In materials science, the sol–gel process is a method for producing solid materials from small molecules. The method is used for the fabrication of metal oxides, especially the oxides of silicon (Si) and titanium (Ti). The process involves conversion of monomers in solution into a colloidal solution (sol) that acts as the precursor for an integrated network (or gel) of either discrete particles or network polymers. Typical precursors are metal alkoxides. Sol–gel process is used to produce ceramic nanoparticles.
Stages
In this chemical procedure, a "sol" (a colloidal solution) is formed that then gradually evolves towards the formation of a gel-like diphasic system containing both a liquid phase and solid phase whose morphologies range from discrete particles to continuous polymer networks. In the case of the colloid, the volume fraction of particles (or particle density) may be so low that a significant amount of fluid may need to be removed initially for the gel-like properties to be recognized. This can be accomplished in any number of ways. The simplest method is to allow time for sedimentation to occur, and then pour off the remaining liquid. Centrifugation can also be used to accelerate the process of phase separation.
Removal of the remaining liquid (solvent) phase requires a drying process, which is typically accompanied by a significant amount of shrinkage and densification. The rate at which the solvent can be removed is ultimately determined by the distribution of porosity in the gel. The ultimate microstructure of the final component will clearly be strongly influenced by changes imposed upon the structural template during this phase of processing.
Afterwards, a thermal treatment, or firing process, is often necessary in order to favor further polycondensation and enhance mechanical properties and structural stability via final sintering, densification, and grain growth. One of the distinct advantages of using this methodology as opposed to the more traditional processing techniques is that densification is often achieved at a much lower temperature.
The precursor sol can be either deposited on a substrate to form a film (e.g., by dip-coating or spin coating), cast into a suitable container with the desired shape (e.g., to obtain monolithic ceramics, glasses, fibers, membranes, aerogels), or used to synthesize powders (e.g., microspheres, nanospheres). The sol–gel approach is a cheap and low-temperature technique that allows the fine control of the product's chemical composition. Even small quantities of dopants, such as organic dyes and rare-earth elements, can be introduced in the sol and end up uniformly dispersed in the final product. It can be used in ceramics processing and manufacturing as an investment casting material, or as a means of producing very thin films of metal oxides for various purposes. Sol–gel derived materials have diverse applications in optics, electronics, energy, space, (bio)sensors, medicine (e.g., controlled drug release), reactive material, and separation (e.g., chromatography) technology.
The interest in sol–gel processing can be traced back to the mid-1800s, with the observation that the hydrolysis of tetraethyl orthosilicate (TEOS) under acidic conditions led to the formation of SiO2 in the form of fibers and monoliths. Sol–gel research grew to be so important that in the 1990s more than 35,000 papers were published worldwide on the process.
Particles and polymers
The sol–gel process is a wet-chemical technique used for the fabrication of both glassy and ceramic materials. In this process, the sol (or solution) evolves gradually towards the formation of a gel-like network containing both a liquid phase and a solid phase. Typical precursors are metal alkoxides and metal chlorides, which undergo hydrolysis and polycondensation reactions to form a colloid. The basic structure or morphology of the solid phase can range anywhere from discrete colloidal particles to continuous chain-like polymer networks.
The term colloid is used primarily to describe a broad range of solid-liquid (and/or liquid-liquid) mixtures, all of which contain distinct solid (and/or liquid) particles which are dispersed to various degrees in a liquid medium. The term is specific to the size of the individual particles, which are larger than atomic dimensions but small enough to exhibit Brownian motion. If the particles are large enough, then their dynamic behavior in any given period of time in suspension would be governed by forces of gravity and sedimentation. But if they are small enough to be colloids, then their irregular motion in suspension can be attributed to the collective bombardment of a myriad of thermally agitated molecules in the liquid suspending medium, as described originally by Albert Einstein in his dissertation. Einstein concluded that this erratic behavior could adequately be described using the theory of Brownian motion, with sedimentation being a possible long-term result. This critical size range (or particle diameter) typically ranges from tens of angstroms (10⁻¹⁰ m) to a few micrometres (10⁻⁶ m).
Under certain chemical conditions (typically in base-catalyzed sols), the particles may grow to sufficient size to become colloids, which are affected both by sedimentation and forces of gravity. Stabilized suspensions of such sub-micrometre spherical particles may eventually result in their self-assembly—yielding highly ordered microstructures reminiscent of the prototype colloidal crystal: precious opal.
Under certain chemical conditions (typically in acid-catalyzed sols), the interparticle forces have sufficient strength to cause considerable aggregation and/or flocculation prior to their growth. The formation of a more open continuous network of low density polymers exhibits certain advantages with regard to physical properties in the formation of high performance glass and glass/ceramic components in 2 and 3 dimensions.
In either case (discrete particles or continuous polymer network) the sol evolves then towards the formation of an inorganic network containing a liquid phase (gel). Formation of a metal oxide involves connecting the metal centers with oxo (M-O-M) or hydroxo (M-OH-M) bridges, therefore generating metal-oxo or metal-hydroxo polymers in solution.
In both cases (discrete particles or continuous polymer network), the drying process serves to remove the liquid phase from the gel, yielding a micro-porous amorphous glass or micro-crystalline ceramic. Subsequent thermal treatment (firing) may be performed in order to favor further polycondensation and enhance mechanical properties.
With the viscosity of a sol adjusted into a proper range, both optical quality glass fiber and refractory ceramic fiber can be drawn which are used for fiber optic sensors and thermal insulation, respectively. In addition, uniform ceramic powders of a wide range of chemical composition can be formed by precipitation.
Polymerization
The Stöber process is a well-studied example of polymerization of an alkoxide, specifically TEOS. The chemical formula for TEOS is given by Si(OC2H5)4, or Si(OR)4, where the alkyl group R = C2H5. Alkoxides are ideal chemical precursors for sol–gel synthesis because they react readily with water. The reaction is called hydrolysis, because a hydroxyl ion becomes attached to the silicon atom as follows:
Si(OR)4 + H2O → HO−Si(OR)3 + R−OH
Depending on the amount of water and catalyst present, hydrolysis may proceed to completion to silica:
Si(OR)4 + 2 H2O → SiO2 + 4 R−OH
Complete hydrolysis often requires an excess of water and/or the use of a hydrolysis catalyst such as acetic acid or hydrochloric acid. Intermediate species including [(OR)2−Si−(OH)2] or [(OR)3−Si−(OH)] may result as products of partial hydrolysis reactions. Early intermediates result from two partially hydrolyzed monomers linked with a siloxane [Si−O−Si] bond:
(OR)3−Si−OH + HO−Si−(OR)3 → [(OR)3Si−O−Si(OR)3] + H−O−H
or
(OR)3−Si−OR + HO−Si−(OR)3 → [(OR)3Si−O−Si(OR)3] + R−OH
Thus, polymerization is associated with the formation of a 1-, 2-, or 3-dimensional network of siloxane [Si−O−Si] bonds accompanied by the production of H−O−H and R−O−H species.
By definition, condensation liberates a small molecule, such as water or alcohol. This type of reaction can continue to build larger and larger silicon-containing molecules by the process of polymerization. Thus, a polymer is a huge molecule (or macromolecule) formed from hundreds or thousands of units called monomers. The number of bonds that a monomer can form is called its functionality. Polymerization of silicon alkoxide, for instance, can lead to complex branching of the polymer, because a fully hydrolyzed monomer Si(OH)4 is tetrafunctional (can branch or bond in 4 different directions). Alternatively, under certain conditions (e.g., low water concentration) fewer than 4 of the OR or OH groups (ligands) will be capable of condensation, so relatively little branching will occur. The mechanisms of hydrolysis and condensation, and the factors that bias the structure toward linear or branched structures are the most critical issues of sol–gel science and technology. This reaction is favored in both basic and acidic conditions.
Sono-Ormosil
Sonication is an efficient tool for the synthesis of polymers. The cavitational shear forces, which stretch out and break the chain in a non-random process, result in a lowering of the molecular weight and poly-dispersity. Furthermore, multi-phase systems are very efficiently dispersed and emulsified, so that very fine mixtures are provided. This means that ultrasound increases the rate of polymerisation over conventional stirring and results in higher molecular weights with lower polydispersities. Ormosils (organically modified silicate) are obtained when silane is added to gel-derived silica during the sol–gel process. The product is a molecular-scale composite with improved mechanical properties. Sono-Ormosils are characterized by a higher density than classic gels as well as an improved thermal stability. An explanation for this might be the increased degree of polymerization.
Pechini process
For single cation systems like SiO2 and TiO2, hydrolysis and condensation processes naturally give rise to homogeneous compositions. For systems involving multiple cations, such as strontium titanate (SrTiO3) and other perovskite systems, the concept of steric immobilisation becomes relevant. To avoid the formation of multiple phases of binary oxides as the result of differing hydrolysis and condensation rates, the entrapment of cations in a polymer network is an effective approach, generally termed the Pechini process. In this process, a chelating agent is used, most often citric acid, to surround aqueous cations and sterically entrap them. Subsequently, a polymer network is formed to immobilize the chelated cations in a gel or resin. This is most often achieved by poly-esterification using ethylene glycol. The resulting polymer is then combusted under oxidising conditions to remove organic content and yield a product oxide with homogeneously dispersed cations.
Nanomaterials, aerogels, xerogels
If the liquid in a wet gel is removed under a supercritical condition, a highly porous and extremely low density material called aerogel is obtained. Drying the gel by means of low temperature treatments (25–100 °C), it is possible to obtain porous solid matrices called xerogels. In addition, a sol–gel process was developed in the 1950s for the production of radioactive powders of UO2 and ThO2 for nuclear fuels, without generation of large quantities of dust.
Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved.
In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding heterogeneous densification.
Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from heterogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws.
It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. Monodisperse colloids provide this potential.
Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. Such defective polycrystalline structures would appear to be the basic elements of nanoscale materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in inorganic systems such as sintered ceramic nanomaterials.
Ultra-fine and uniform ceramic powders can be formed by precipitation. These powders of single and multiple component compositions can be produced at a nanoscale particle size for dental, biomedical, agrochemical, or catalytic applications. Powder abrasives, used in a variety of finishing operations, are made using a sol–gel type process. One of the more important applications of sol–gel processing is to carry out zeolite synthesis. Other elements (metals, metal oxides) can be easily incorporated into the final product and the silicate sol formed by this method is very stable. Semi-stable metal complexes can be used to produce sub-2 nm oxide particles without thermal treatment. During base-catalyzed synthesis, hydroxo (M-OH) bonds may be avoided in favor of oxo (M-O-M) using a ligand which is strong enough to prevent reaction in the hydroxo regime but weak enough to allow reaction in the oxo regime (see Pourbaix diagram).
Applications
The applications for sol gel-derived products are numerous. For example, scientists have used it to produce the world's lightest materials and also some of its toughest ceramics.
Protective coatings
One of the largest application areas is thin films, which can be produced on a piece of substrate by spin coating or dip-coating. Protective and decorative coatings, and electro-optic components can be applied to glass, metal and other types of substrates with these methods. Cast into a mold, and with further drying and heat-treatment, dense ceramic or glass articles with novel properties can be formed that cannot be created by any other method. Other coating methods include spraying, electrophoresis, inkjet printing, or roll coating.
Thin films and fibers
With the viscosity of a sol adjusted into a proper range, both optical and refractory ceramic fibers can be drawn which are used for fiber optic sensors and thermal insulation, respectively. Thus, many ceramic materials, both glassy and crystalline, have found use in various forms from bulk solid-state components to high surface area forms such as thin films, coatings and fibers. Also, thin films have found their application in the electronic field and can be used as sensitive components of resistive gas sensors.
Controlled release
Sol-gel technology has been applied for controlled release of fragrances and drugs.
Opto-mechanical
Macroscopic optical elements and active optical components as well as large area hot mirrors, cold mirrors, lenses, and beam splitters can be made by the sol–gel route. In the processing of high performance ceramic nanomaterials with superior opto-mechanical properties under adverse conditions, the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during the synthesis or formation of the object. Thus a reduction of the original particle size well below the wavelength of visible light (~500 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material.
Furthermore, microscopic pores in sintered ceramic nanomaterials, mainly trapped at the junctions of microcrystalline grains, cause light to scatter and prevent true transparency. The total volume fraction of these nanoscale pores (both intergranular and intragranular porosity) must be less than 1% for high-quality optical transmission, i.e. the density has to be 99.99% of the theoretical crystalline density.
See also
Coacervate, small spheroidal droplet of colloidal particles in suspension
Freeze-casting
Freeze gelation
Mechanics of gelation
Random graph theory of gelation
Liquid–liquid extraction
References
Further reading
Colloidal Dispersions, Russel, W. B., et al., Eds., Cambridge University Press (1989)
Glasses and the Vitreous State, Zarzycki. J., Cambridge University Press, 1991
The Sol to Gel Transition. Plinio Innocenzi. Springer Briefs in Materials. Springer. 2016.
External links
International Sol–Gel Society
The Sol–Gel Gateway
Ceramic engineering
Dosage forms
Gels
Glass chemistry
Glass coating and surface modification
Industrial processes
Thin film deposition
Transparent materials | Sol–gel process | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,800 | [
"Glass engineering and science",
"Physical phenomena",
"Glass chemistry",
"Thin film deposition",
"Coatings",
"Thin films",
"Colloids",
"Optical phenomena",
"Materials",
"Transparent materials",
"Gels",
"Ceramic engineering",
"Planes (geometry)",
"Solid state engineering",
"Glass coating... |
7,663,482 | https://en.wikipedia.org/wiki/Amino%20acid%20synthesis | Amino acid biosynthesis is the set of biochemical processes (metabolic pathways) by which the amino acids are produced. The substrates for these processes are various compounds in the organism's diet or growth media. Not all organisms are able to synthesize all amino acids. For example, humans can synthesize 11 of the 20 standard amino acids. These 11 are called the non-essential amino acids.
α-Ketoglutarates: glutamate, glutamine, proline, arginine
Most amino acids are synthesized from α-ketoacids, and later transaminated from another amino acid, usually glutamate. The enzyme involved in this reaction is an aminotransferase.
α-ketoacid + glutamate ⇄ amino acid + α-ketoglutarate
Glutamate itself is formed by amination of α-ketoglutarate:
α-ketoglutarate + NH4+ ⇄ glutamate
The α-ketoglutarate family of amino acid synthesis (synthesis of glutamate, glutamine, proline and arginine) begins with α-ketoglutarate, an intermediate in the Citric Acid Cycle. The concentration of α-ketoglutarate is dependent on the activity and metabolism within the cell along with the regulation of enzymatic activity. In E. coli, citrate synthase, the enzyme involved in the condensation reaction initiating the Citric Acid Cycle, is strongly inhibited by α-ketoglutarate feedback inhibition and can be inhibited by DPNH as well as high concentrations of ATP. This is one of the initial regulations of the α-ketoglutarate family of amino acid synthesis.
The regulation of the synthesis of glutamate from α-ketoglutarate is subject to regulatory control of the Citric Acid Cycle as well as mass action dependent on the concentrations of reactants involved due to the reversible nature of the transamination and glutamate dehydrogenase reactions.
The conversion of glutamate to glutamine is regulated by glutamine synthetase (GS) and is a key step in nitrogen metabolism. This enzyme is regulated by at least four different mechanisms: 1. Repression and derepression due to nitrogen levels; 2. Activation and inactivation due to enzymatic forms (taut and relaxed); 3. Cumulative feedback inhibition through end product metabolites; and 4. Alterations of the enzyme due to adenylation and deadenylation. In rich nitrogenous media or growth conditions containing high quantities of ammonia there is a low level of GS, whereas in limiting quantities of ammonia the specific activity of the enzyme is 20-fold higher. The conformation of the enzyme plays a role in regulation depending on whether GS is in the taut or relaxed form. The taut form of GS is fully active, but the removal of manganese converts the enzyme to the relaxed state. The specific conformational state occurs based on the binding of specific divalent cations and is also related to adenylation. The feedback inhibition of GS is due to a cumulative feedback from several metabolites including L-tryptophan, L-histidine, AMP, CTP, glucosamine-6-phosphate and carbamyl phosphate, alanine, and glycine. An excess of any one product does not individually inhibit the enzyme, but a combination or accumulation of all the end products has a strong inhibitory effect on the synthesis of glutamine. Glutamine synthetase activity is also inhibited via adenylation. The adenylation activity is catalyzed by the bifunctional adenylyltransferase/adenylyl removal (AT/AR) enzyme. Glutamine and a regulatory protein called PII act together to stimulate adenylation.
The regulation of proline biosynthesis can depend on the initial controlling step through negative feedback inhibition. In E. coli, proline allosterically inhibits Glutamate 5-kinase which catalyzes the reaction from L-glutamate to an unstable intermediate L-γ-Glutamyl phosphate.
Arginine synthesis also utilizes negative feedback as well as repression through a repressor encoded by the gene argR. The gene product of argR, ArgR an aporepressor, and arginine as a corepressor affect the operon of arginine biosynthesis. The degree of repression is determined by the concentrations of the repressor protein and corepressor level.
Erythrose 4-phosphate and phosphoenolpyruvate: phenylalanine, tyrosine, and tryptophan
Phenylalanine, tyrosine, and tryptophan, the aromatic amino acids, arise from chorismate. The first step, condensation of 3-deoxy-D-arabino-heptulosonic acid 7-phosphate (DAHP) from PEP/E4P, uses three isoenzymes AroF, AroG, and AroH. Each one of these has its synthesis regulated from tyrosine, phenylalanine, and tryptophan, respectively. The rest of the enzymes in the common pathway (conversion of DAHP to chorismate) appear to be synthesized constitutively, except for shikimate kinase, which can be inhibited by shikimate through linear mixed-type inhibition.
Tyrosine and phenylalanine are biosynthesized from prephenate, which is converted to an amino acid-specific intermediate. This process is mediated by a phenylalanine-specific (PheA) chorismate mutase-prephenate dehydratase or a tyrosine-specific (TyrA) chorismate mutase-prephenate dehydrogenase. PheA uses a simple dehydratase to convert prephenate to phenylpyruvate, while TyrA uses an NAD-dependent dehydrogenase to make 4-hydroxyphenylpyruvate. Both PheA and TyrA are feedback inhibited by their respective amino acids. Tyrosine can also be inhibited at the transcriptional level by the TyrR repressor. TyrR binds to the TyrR boxes on the operon near the promoter of the gene that it wants to repress.
Tryptophan biosynthesis involves conversion of chorismate to anthranilate using anthranilate synthase. This enzyme requires either glutamine as the amino group donor or ammonia itself. Anthranilate synthase is regulated by the gene products of trpE and trpG. trpE encodes the first subunit, which binds to chorismate and moves the amino group from the donor to chorismate. trpG encodes the second subunit, which facilitates the transfer of the amino group from glutamine. Anthranilate synthase is also regulated by feedback inhibition: tryptophan is a co-repressor to the TrpR repressor.
Oxaloacetate/aspartate: lysine, asparagine, methionine, threonine, and isoleucine
The oxaloacetate/aspartate family of amino acids is composed of lysine, asparagine, methionine, threonine, and isoleucine. Aspartate can be converted into lysine, asparagine, methionine and threonine. Threonine also gives rise to isoleucine.
The associated enzymes are subject to regulation via feedback inhibition and/or repression at the genetic level. As is typical in highly branched metabolic pathways, there is additional regulation at each branch point of the pathway. This type of regulatory scheme allows control over the total flux of the aspartate pathway in addition to the total flux of individual amino acids. The aspartate pathway uses L-aspartic acid as the precursor for the biosynthesis of one-fourth of the building block amino acids.
Aspartate
The biosynthesis of aspartate frequently involves the transamination of oxaloacetate.
The enzyme aspartokinase, which catalyzes the phosphorylation of aspartate and initiates its conversion into other amino acids, can be broken up into 3 isozymes, AK-I, II and III. AK-I is feed-back inhibited by threonine, while AK-II and III are inhibited by lysine. As a sidenote, AK-III catalyzes the phosphorylation of aspartic acid that is the committed step in this biosynthetic pathway. Aspartate kinase becomes downregulated by the presence of threonine or lysine.
Lysine
Lysine is synthesized from aspartate via the diaminopimelate (DAP) pathway. The initial two stages of the DAP pathway are catalyzed by aspartokinase and aspartate semialdehyde dehydrogenase. These enzymes play a key role in the biosynthesis of lysine, threonine, and methionine. There are two bifunctional aspartokinase/homoserine dehydrogenases, ThrA and MetL, in addition to a monofunctional aspartokinase, LysC. Transcription of aspartokinase genes is regulated by concentrations of the subsequently produced amino acids, lysine, threonine, and methionine. The higher these amino acids concentrations, the less the gene is transcribed. ThrA and LysC are also feed-back inhibited by threonine and lysine. Finally, DAP decarboxylase LysA mediates the last step of the lysine synthesis and is common for all studied bacterial species. The formation of aspartate kinase (AK), which catalyzes the phosphorylation of aspartate and initiates its conversion into other amino acids, is also inhibited by both lysine and threonine, which prevents the formation of the amino acids derived from aspartate. Additionally, high lysine concentrations inhibit the activity of dihydrodipicolinate synthase (DHPS). So, in addition to inhibiting the first enzyme of the aspartate families biosynthetic pathway, lysine also inhibits the activity of the first enzyme after the branch point, i.e. the enzyme that is specific for lysine's own synthesis.
Asparagine
The biosynthesis of asparagine originates with aspartate using a transaminase enzyme. The enzyme asparagine synthetase produces asparagine, AMP, glutamate, and pyrophosphate from aspartate, glutamine, and ATP. In the asparagine synthetase reaction, ATP is used to activate aspartate, forming β-aspartyl-AMP. Glutamine donates an ammonium group, which reacts with β-aspartyl-AMP to form asparagine and free AMP.
Two asparagine synthetases are found in bacteria. Both are referred to as the AsnC protein. They are coded for by the genes AsnA and AsnB. AsnC is autogenously regulated, which is where the product of a structural gene regulates the expression of the operon in which the genes reside. The stimulating effect of AsnC on AsnA transcription is downregulated by asparagine. However, the autoregulation of AsnC is not affected by asparagine.
Methionine
Biosynthesis by the transsulfuration pathway starts with aspartic acid. Relevant enzymes include aspartokinase, aspartate-semialdehyde dehydrogenase, homoserine dehydrogenase, homoserine O-transsuccinylase, cystathionine-γ-synthase, Cystathionine-β-lyase (in mammals, this step is performed by homocysteine methyltransferase or betaine—homocysteine S-methyltransferase.)
Methionine biosynthesis is subject to tight regulation. The repressor protein MetJ, in cooperation with the corepressor protein S-adenosyl-methionine, mediates the repression of methionine's biosynthesis. The regulator MetR is required for MetE and MetH gene expression and functions as a transactivator of transcription for these genes. MetR transcriptional activity is regulated by homocysteine, which is the metabolic precursor of methionine. It is also known that vitamin B12 can repress MetE gene expression, which is mediated by the MetH holoenzyme.
Threonine
In plants and microorganisms, threonine is synthesized from aspartic acid via α-aspartyl-semialdehyde and homoserine. Homoserine undergoes O-phosphorylation; this phosphate ester undergoes hydrolysis concomitant with relocation of the OH group. Enzymes involved in a typical biosynthesis of threonine include aspartokinase, β-aspartate semialdehyde dehydrogenase, homoserine dehydrogenase, homoserine kinase, threonine synthase.
The biosynthesis of threonine is regulated via allosteric regulation of its precursor, homoserine, by structurally altering the enzyme homoserine dehydrogenase. This reaction occurs at a key branch point in the pathway, with the substrate homoserine serving as the precursor for the biosynthesis of lysine, methionine, threonine and isoleucine. High levels of threonine result in low levels of homoserine synthesis. The synthesis of aspartate kinase (AK), which catalyzes the phosphorylation of aspartate and initiates its conversion into other amino acids, is feed-back inhibited by lysine, isoleucine, and threonine, which prevents the synthesis of the amino acids derived from aspartate. So, in addition to inhibiting the first enzyme of the aspartate families biosynthetic pathway, threonine also inhibits the activity of the first enzyme after the branch point, i.e. the enzyme that is specific for threonine's own synthesis.
Isoleucine
In plants and microorganisms, isoleucine is biosynthesized from pyruvic acid and alpha-ketobutyrate (the latter derived from threonine). Enzymes involved in this biosynthesis include acetolactate synthase (also known as acetohydroxy acid synthase), acetohydroxy acid isomeroreductase, dihydroxyacid dehydratase, and valine aminotransferase.
In terms of regulation, the enzymes threonine deaminase, dihydroxy acid dehydrase, and transaminase are controlled by end-product regulation. i.e. the presence of isoleucine will downregulate threonine biosynthesis. High concentrations of isoleucine also result in the downregulation of aspartate's conversion into the aspartyl-phosphate intermediate, hence halting further biosynthesis of lysine, methionine, threonine, and isoleucine.
Ribose 5-phosphates: histidine
In E. coli, the biosynthesis begins with the condensation of ATP and 5-phosphoribosyl-pyrophosphate (PRPP), catalyzed by ATP-phosphoribosyl transferase. Phosphoribosyl-ATP converts to phosphoribosyl-AMP (PRAMP). His4 then catalyzes the formation of phosphoribosylformiminoAICAR-phosphate, which is then converted to phosphoribulosylformimino-AICAR-P by the His6 gene product. His7 splits phosphoribulosylformimino-AICAR-P to form D-erythro-imidazole-glycerol-phosphate. Afterwards, His3 forms imidazole acetol-phosphate releasing water. His5 then makes L-histidinol-phosphate, which is then hydrolyzed by His2 making histidinol. His4 catalyzes the oxidation of L-histidinol to form L-histidinal, an amino aldehyde. In the last step, L-histidinal is converted to L-histidine.
In general, the histidine biosynthesis is very similar in plants and microorganisms.
HisG → HisE/HisI → HisA → HisH → HisF → HisB → HisC → HisB → HisD (HisE/I and HisB are both bifunctional enzymes)
The enzymes are coded for on the His operon. This operon has a distinct block of the leader sequence, called block 1:
Met-Thr-Arg-Val-Gln-Phe-Lys-His-His-His-His-His-His-His-Pro-Asp
This leader sequence is important for the regulation of histidine in E. coli. The His operon operates under a system of coordinated regulation where all the gene products will be repressed or derepressed equally. The main factor in the repression or derepression of histidine synthesis is the concentration of histidine charged tRNAs. The regulation of histidine is actually quite simple considering the complexity of its biosynthesis pathway, and it closely resembles regulation of tryptophan. In this system the full leader sequence has 4 blocks of complementary strands that can form hairpin loop structures. Block one, shown above, is the key to regulation. When histidine charged tRNA levels are low in the cell the ribosome will stall at the string of His residues in block 1. This stalling of the ribosome will allow complementary strands 2 and 3 to form a hairpin loop. The loop formed by strands 2 and 3 forms an anti-terminator and translation of the his genes will continue and histidine will be produced. However, when histidine charged tRNA levels are high the ribosome will not stall at block 1; this will not allow strands 2 and 3 to form a hairpin. Instead strands 3 and 4 will form a hairpin loop further downstream of the ribosome. The hairpin loop formed by strands 3 and 4 is a terminating loop; when the ribosome comes into contact with the loop, it will be “knocked off” the transcript. When the ribosome is removed the His genes will not be translated and histidine will not be produced by the cell.
3-Phosphoglycerates: serine, glycine, cysteine
The 3-Phosphoglycerate family of amino acids includes serine, glycine, and cysteine.
Serine
Serine is the first amino acid in this family to be produced; it is then modified to produce both glycine and cysteine (and many other biologically important molecules). Serine is formed from 3-phosphoglycerate in the following pathway:
3-phosphoglycerate → phosphohydroxyl-pyruvate → phosphoserine → serine
The conversion from 3-phosphoglycerate to phosphohydroxyl-pyruvate is achieved by the enzyme phosphoglycerate dehydrogenase. This enzyme is the key regulatory step in this pathway. Phosphoglycerate dehydrogenase is regulated by the concentration of serine in the cell. At high concentrations this enzyme will be inactive and serine will not be produced. At low concentrations of serine the enzyme will be fully active and serine will be produced by the bacterium. Since serine is the first amino acid produced in this family both glycine and cysteine will be regulated by the available concentration of serine in the cell.
Glycine
Glycine is biosynthesized from serine, catalyzed by serine hydroxymethyltransferase (SHMT). The enzyme effectively replaces a hydroxymethyl group with a hydrogen atom.
SHMT is coded by the gene glyA. The regulation of glyA is complex and is known to incorporate serine, glycine, methionine, purines, thymine, and folates. The full mechanism has yet to be elucidated. The methionine gene product MetR and the methionine intermediate homocysteine are known to positively regulate glyA. Homocysteine is a coactivator of glyA and must act in concert with MetR. On the other hand, PurR, a protein which plays a role in purine synthesis, and S-adenosylmethionine are known to downregulate glyA. PurR binds directly to the control region of glyA and effectively turns the gene off so that glycine will not be produced by the bacterium.
Cysteine
The genes required for the synthesis of cysteine are coded for on the cys regulon. The integration of sulfur is positively regulated by CysB. Effective inducers of this regulon are N-acetyl-serine (NAS) and very small amounts of reduced sulfur. CysB functions by binding to DNA half sites on the cys regulon. These half sites differ in quantity and arrangement depending on the promoter of interest. There is however one half site that is conserved. It lies just upstream of the -35 site of the promoter. There are also multiple accessory sites depending on the promoter. In the absence of the inducer, NAS, CysB will bind the DNA and cover many of the accessory half sites. Without the accessory half sites the regulon cannot be transcribed and cysteine will not be produced. It is believed that the presence of NAS causes CysB to undergo a conformational change. This conformational change allows CysB to bind properly to all the half sites and causes the recruitment of the RNA polymerase. The RNA polymerase will then transcribe the cys regulon and cysteine will be produced.
Further regulation is required for this pathway, however. CysB can downregulate its own transcription by binding to its own DNA sequence and blocking the RNA polymerase. In this case NAS will act to disallow the binding of CysB to its own DNA sequence. OAS is a precursor of NAS; cysteine itself can inhibit CysE, which functions to create OAS. Without the necessary OAS, NAS will not be produced and cysteine will not be produced. There are two other negative regulators of cysteine. These are the molecules sulfide and thiosulfate; they bind to CysB and compete with NAS for the binding of CysB.
Pyruvate: alanine, valine, and leucine
Pyruvate, the result of glycolysis, can feed into both the TCA cycle and fermentation processes. Reactions beginning with either one or two molecules of pyruvate lead to the synthesis of alanine, valine, and leucine. Feedback inhibition of final products is the main method of inhibition, and, in E. coli, the ilvEDA operon also plays a part in this regulation.
Alanine
Alanine is produced by the transamination of one molecule of pyruvate using two alternate steps: 1) conversion of glutamate to α-ketoglutarate using a glutamate-alanine transaminase, and 2) conversion of valine to α-ketoisovalerate via Transaminase C.
Not much is known about the regulation of alanine synthesis. The only definite method is the bacterium's ability to repress Transaminase C activity by either valine or leucine (see ilvEDA operon). Other than that, alanine biosynthesis does not seem to be regulated.
Valine
Valine is produced by a four-enzyme pathway. It begins with the condensation of two equivalents of pyruvate catalyzed by acetohydroxy acid synthase yielding α-acetolactate. The second step involves the NADPH-dependent reduction of α-acetolactate and migration of methyl groups to produce α, β-dihydroxyisovalerate. This is catalyzed by acetohydroxy isomeroreductase. The third step is the dehydration of α, β-dihydroxyisovalerate catalyzed by dihydroxy acid dehydrase. In the fourth and final step, the resulting α-ketoisovalerate undergoes transamination catalyzed either by an alanine-valine transaminase or a glutamate-valine transaminase. Valine biosynthesis is subject to feedback inhibition in the production of acetohydroxy acid synthase.
Leucine
The leucine synthesis pathway diverges from the valine pathway beginning with α-ketoisovalerate. α-Isopropylmalate synthase catalyzes this condensation with acetyl CoA to produce α-isopropylmalate. An isomerase converts α-isopropylmalate to β-isopropylmalate. The third step is the NAD+-dependent oxidation of β-isopropylmalate catalyzed by a dehydrogenase. The final step is the transamination of the α-ketoisocaproate by the action of a glutamate-leucine transaminase.
Leucine, like valine, regulates the first step of its pathway by inhibiting the action of the α-Isopropylmalate synthase. Because leucine is synthesized by a diversion from the valine synthetic pathway, the feedback inhibition of valine on its pathway also can inhibit the synthesis of leucine.
ilvEDA operon
The genes that encode both the dihydroxy acid dehydrase used in the creation of α-ketoisovalerate and Transaminase E, as well as other enzymes are encoded on the ilvEDA operon. This operon is bound and inactivated by valine, leucine, and isoleucine. (Isoleucine is not a direct derivative of pyruvate, but is produced by the use of many of the same enzymes used to produce valine and, indirectly, leucine.) When one of these amino acids is limited, the gene furthest from the amino-acid binding site of this operon can be transcribed. When a second of these amino acids is limited, the next-closest gene to the binding site can be transcribed, and so forth.
Commercial syntheses of amino acids
The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in the industrial synthesis of L-cysteine for example. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
References
External links
NCBI Bookshelf Free Textbook Access
Metabolism | Amino acid synthesis | [
"Chemistry",
"Biology"
] | 5,866 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
7,663,519 | https://en.wikipedia.org/wiki/Generalized%20Maxwell%20model | The generalized Maxwell model also known as the Maxwell–Wiechert model (after James Clerk Maxwell and E Wiechert) is the most general form of the linear model for viscoelasticity. In this model, several Maxwell elements are assembled in parallel. It takes into account that the relaxation does not occur at a single time, but in a set of times. Due to the presence of molecular segments of different lengths, with shorter ones contributing less than longer ones, there is a varying time distribution. The Wiechert model shows this by having as many spring–dashpot Maxwell elements as are necessary to accurately represent the distribution. The figure on the right shows the generalised Wiechert model.
General model form
Solids
Given N elements with moduli E_i, viscosities η_i, and relaxation times τ_i = η_i / E_i
The general form for the model for solids is given by:
Example: standard linear solid model
Following the above model with a single element (N = 1) yields the standard linear solid model:
Fluids
Given N elements with moduli E_i, viscosities η_i, and relaxation times τ_i = η_i / E_i
The general form for the model for fluids is given by:
Example: three parameter fluid
The analogous model to the standard linear solid model is the three parameter fluid, also known as the Jeffreys model:
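Since the constitutive equations themselves are not reproduced above, here is a minimal sketch of how the Wiechert model is commonly evaluated in practice, assuming the usual Prony-series form of the stress-relaxation modulus for the solid variant, E(t) = E_inf + Σ E_i·exp(−t/τ_i); the symbol names and numerical values below are illustrative assumptions, not quantities defined in the text.

```python
# Evaluate the Prony-series relaxation modulus of a Wiechert solid.
import numpy as np

def relaxation_modulus(t, E_inf, E, tau):
    """E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    E = np.asarray(E, dtype=float)
    tau = np.asarray(tau, dtype=float)
    return E_inf + np.sum(E[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

# Three Maxwell arms with relaxation times spread over several decades,
# mimicking the distribution of relaxation times discussed above; one
# extra equilibrium spring gives the long-time (solid) response.
E_inf = 1.0                      # equilibrium modulus
E_arms = [5.0, 3.0, 2.0]         # arm moduli
tau_arms = [0.01, 1.0, 100.0]    # arm relaxation times, seconds

t = np.logspace(-3, 4, 8)
print(relaxation_modulus(t, E_inf, E_arms, tau_arms))
```

Fitting such a series to measured relaxation data (one term per decade of time is a common rule of thumb) is how the idea of using as many Maxwell elements as necessary to represent the relaxation-time distribution is applied in practice.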
References
Materials science
Non-Newtonian fluids
James Clerk Maxwell | Generalized Maxwell model | [
"Physics",
"Materials_science",
"Engineering"
] | 256 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
7,664,801 | https://en.wikipedia.org/wiki/3M | 3M Company (originally the Minnesota Mining and Manufacturing Company) is an American multinational conglomerate operating in the fields of industry, worker safety, and consumer goods. Based in the Saint Paul, Minnesota, suburb of Maplewood, the company produces over 60,000 products, including adhesives, abrasives, laminates, passive fire protection, personal protective equipment, window films, paint protection film, electrical, electronic connecting, insulating materials, car-care products, electronic circuits, and optical films. Among its best-known consumer brands are Scotch Tape, Scotchguard surface protectants, Post-it notes, and Nexcare adhesive bandages. 3M’s stock ticker symbol is MMM and is listed on the New York Stock Exchange, Inc. (NYSE), the Chicago Stock Exchange, Inc., and the SIX Swiss Exchange.
3M made $35.4 billion in total sales in 2021 and ranked number 102 in the Fortune 500 list of the largest United States corporations by total revenue. The company had approximately 95,000 employees and operations in more than 70 countries. There are a few international subsidiaries, such as 3M India, 3M Japan, and 3M Canada.
In June 2023, 3M reached a settlement to pay more than $10 billion to US public water systems to resolve claims over the company's contamination of water with PFASs (so-called forever chemicals). It has been revealed that the company knew of the health harms of PFAS in the 1990s, yet concealed these harms and continues to sell contaminated products.
History
Five businessmen founded the Minnesota Mining and Manufacturing Company as a mining venture in Two Harbors, Minnesota, making their first sale on June 13, 1902. The goal was to mine corundum, a crystalline form of aluminium oxide, which failed because the mine's mineral holdings were anorthosite, a feldspar which had no commercial value. Co-founder John Dwan solicited funds in exchange for stock and Edgar Ober and Lucius Ordway took over the company in 1905. The company moved to Duluth and began researching and producing sandpaper products. William L. McKnight, later a key executive, joined the company in 1907, and A. G. Bush joined in 1909. 3M finally became financially stable in 1916 and was able to pay dividends.
The company moved to Saint Paul in 1910, where it remained for 52 years before outgrowing the campus and moving to its current headquarters at 3M Center in Maplewood, Minnesota, in 1962.
In 1947, 3M began producing perfluorooctanoic acid (PFOA), an industrial surfactant and chemical feedstock, by electrochemical fluorination. In 1951, DuPont purchased PFOA from then-Minnesota Mining and Manufacturing Company for use in the manufacturing of teflon, a product that brought DuPont a billion-dollar-a-year profit by the 1990s. DuPont referred to PFOA as C8. The original formula for Scotchgard, a water repellent applied to fabrics, was discovered accidentally in 1952 by 3M chemists Patsy Sherman and Samuel Smith. Sales began in 1956, and in 1973 the two chemists received a patent for the formula.
In the late 1950s, 3M produced the first asthma inhaler, but the company did not enter the pharmaceutical industry until the mid-1960s with the acquisition of Riker Laboratories, moving it from California to Minnesota. 3M retained the Riker Laboratories name for the subsidiary until at least 1985. In the mid-1990s, 3M Pharmaceuticals, as the division came to be called, produced the first CFC-free asthma inhaler in response to adoption of the Montreal Protocol by the United States. In the 1980s and 1990s, the company spent fifteen years developing a topical cream delivery technology which led in 1997 to health authority approval and marketing of a symptomatic treatment for genital warts, Aldara. 3M divested its pharmaceutical unit through three deals in 2006. At the time, 3M Pharmaceuticals comprised about 20% of 3M's healthcare business and employed just over a thousand people.
By the 1970s, 3M developed a theatrical blood formula based on red colorfast microbeads suspended in a carrier liquid. This stage blood was sold as Nextel Simulated Blood and was used during the production of the 1978 film Dawn of the Dead. It has since been discontinued.
In the late 1970s, 3M Mincom was involved in some of the first digital audio recordings to see commercial release when a prototype machine was brought to the Sound 80 studios in Minneapolis. In 1979 3M introduced a digital audio recording system called "3M Digital Audio Mastering System".
3M launched "Press 'n Peel", a sticky bookmark page holder, in stores in four cities in 1977, but the results were disappointing. A year later, 3M instead issued free samples of it as a sticky note directly to consumers in Boise, Idaho, with 95% of those who tried them indicating they would buy the product. The product was sold as "Post-Its" from 1979, when the rollout began, and was sold across the United States from April 6, 1980. The following year they were launched in Canada and Europe.
In 1980, the company acquired Comtal, a manufacturer of digital image processors.
21st century
On April 8, 2002, 3M's 100th anniversary, the company changed its legal name to "3M Company". On September 8, 2008, 3M announced an agreement to acquire Meguiar's, a car-care products company that was family-owned for over a century. In August 2010, 3M acquired Cogent Systems for $943 million, and on October 13, 2010, 3M completed acquisition of Arizant Inc. In December 2011, 3M completed the acquisition of the Winterthur Technology Group, a bonded abrasives company.
In 2011, 3M created CloudLibrary as part of its library systems unit as a competitor to OverDrive, Inc.; in 2015, 3M sold the North American part of that unit to Bibliotheca Group GmbH, a company founded in 2011 and funded by One Equity Partners Capital Advisors, a division of JP Morgan Chase.
As of 2012, 3M was one of the 30 companies included in the Dow Jones Industrial Average, added on August 9, 1976, and was 97 on the 2011 Fortune 500 list. On January 3, 2012, it was announced that the Office and Consumer Products Division of Avery Dennison was being bought by 3M for $550 million. The transaction was canceled by 3M in September 2012 amid antitrust concerns.
In May 2013, 3M sold Scientific Anglers and Ross Reels to Orvis. Ross Reels had been acquired by 3M in 2010.
In March 2017, 3M purchased Johnson Controls International Plc's safety gear business, Scott Safety, for $2 billion.
In 2017, 3M had net sales for the year of $31.657 billion, up from $30.109 billion the year before. In 2018, it was reported that the company would pay $850 million to end the Minnesota water pollution case concerning perfluorochemicals.
On May 25, 2018, Michael F. Roman was appointed CEO by the board of directors. On December 19, 2018, 3M announced it had entered into a definitive agreement to acquire the technology business of M*Modal, for a total enterprise value of $1.0 billion.
In October 2019, 3M purchased Acelity and its KCI subsidiaries for $6.7 billion, including assumption of debt and other adjustments.
On May 1, 2020, 3M divested substantially all of its drug delivery business to an affiliate of Altaris Capital Partners, LLC. for approximately $650 million, including a 17% interest in the new operating company, Kindeva Drug Delivery.
In December 2021, 3M announced that it would merge its food-safety business with food testing and animal healthcare products maker Neogen. The deal, with an enterprise value of about $5.3 billion, closed in September 2022.
In July 2022, the company announced it would spin off its healthcare assets to form a new, independent firm, likely completing the transaction in 2023. 3M will retain an ownership stake of 19.9% in the new, publicly-traded health care company and gradually divest the holdings. The company will be known as Solventum Corporation.
In December 2022, the company announced plans to stop producing and using so-called forever chemicals (per and polyfluoroalkyl), which have been commonly used in items such as food packaging, cellphones, nonstick pans, firefighting foams, and clothing. These chemicals are well known for their water-resistant and nonstick properties, but they are also dangerous pollutants that are linked to serious health problems, including ulcerative colitis and cancer. The move comes as governments in the Netherlands and the United States consider actions against 3M.
In March 2024, 3M announced the appointment of William "Bill" Brown as chief executive officer to take effect on May 1, 2024. Michael Roman would remain in the role of executive chairman. Brown, 61, is the former chairman of the board and chief executive officer of L3Harris Technologies.
Products and patents
As of 2019, 3M produces approximately 60,000 products, and has four business groups focused on safety and industrial, transportation and electronics, health care, and consumer products. 3M obtained its first patent in 1924 and acquires approximately 3,000 new patents annually. The company surpassed the 100,000-patent threshold in 2014.
Environmental record
3M's Pollution Prevention Pays (3P) program was established in 1975. The program initially focused on pollution reduction at the plant level and was expanded to promote recycling and reduce waste across all divisions in 1989. By the early 1990s, approximately 2,500 3P projects decreased the company's total global pollutant generation by 50 percent and saved 3M $500–600 million by eliminating the production of waste requiring subsequent treatment.
In 1983, the Oakdale Dump in Oakdale, Minnesota, was listed as an EPA Superfund site after significant groundwater and soil contamination by VOCs and heavy metals was uncovered. The Oakdale Dump was a 3M dumping site utilized through the 1940s and 1950s.
During the 1990s and 2000s, 3M reduced releases of toxic pollutants by 99 percent and greenhouse gas emissions by 72 percent. As of 2012, the United States Environmental Protection Agency (EPA) had awarded 3M the Energy Star Award in each year it had been presented.
"Forever chemicals" water pollution
In 1999, the EPA began investigating perfluorinated chemicals after receiving data on the global distribution and toxicity of perfluorooctanesulfonic acid (PFOS). These materials are part of a broad group of perfluoroalkyl and polyfluoroalkyl substances often referred to as PFAS, each of which has different chemical properties. 3M, formerly the primary U.S. producer of PFOS, announced the phase-out of PFOS, perfluorooctanoic acid, and PFOS-related product production in May 2000. Perfluorinated compounds produced by 3M have been used in non-stick cookware, stain-resistant fabrics, and other products.
The Cottage Grove facility manufactured PFAS from the 1940s to 2002. In response to PFAS contamination of the Mississippi River and surrounding area, 3M stated the area will be "cleaned through a combination of groundwater pump-out wells and soil sediment excavation". The restoration plan was based on an analysis of the company property and surrounding lands. The on-site water treatment facility that handled the plant's post-production water was not capable of removing PFAS, which were released into the nearby Mississippi River. The clean-up cost estimate, which included a granular activated carbon system to remove PFAS from the ground water was $50 to $56 million, funded from a $147 million environmental reserve set aside in 2006.
In 2008, 3M created the Renewable Energy Division within 3M's Industrial and Transportation Business to focus on Energy Generation and Energy Management.
In late 2010, the state of Minnesota sued 3M for $5 billion in punitive damages, claiming the company had released PFCs, classified as toxic chemicals by the EPA, into local waterways. A settlement for $850 million was reached in February 2018. In 2019, 3M, along with the Chemours Company and DuPont, appeared before lawmakers to deny responsibility, with company Senior VP of Corporate Affairs Denise Rutherford arguing that the chemicals pose no human health threat at current levels and that there were no victims.
In 2021, research determined that 3M's Zwijndrecht (Belgium) factory had caused PFOS pollution that may be contaminating agricultural products within a 15-kilometer radius of the plant, an area which includes Antwerp. The Flemish Government has so far paid 63 million euros for cleanup costs, with 3M contributing 75,000 euros. The Flemish Government issued measures advising against the consumption of, for example, home-grown eggs within a radius of 5 kilometers.
In 2023, 3M reached an agreement to pay a $10.3bn settlement with numerous US public water systems to resolve thousands of lawsuits over PFAS contamination.
Carbon footprint
3M reported total CO2e emissions (direct + indirect) for the twelve months ending December 31, 2020, at 5,280 kt (down 550 kt, or 9.4%, year-on-year) and plans to reduce emissions 50% by 2030 from a 2019 base year. The company also aims to achieve carbon neutrality by 2050.
Earplug controversy
The Combat Arms Earplugs, Version 2 (CAEv2), were developed by Aearo Technologies for U.S. military and civilian use. The CAEv2 was a double-ended earplug that 3M claimed would offer users different levels of protection. Between 2003 and 2015, these earplugs were standard issue to members of the U.S. military. 3M acquired Aearo Technologies in 2008.
In May 2016, Moldex-Metric, Inc., a 3M competitor, filed a whistleblower complaint against 3M under the False Claims Act. Moldex-Metric claimed that 3M made false claims to the U.S. government about the safety of its earplugs and that it knew the earplugs had an inherently defective design. In 2018, 3M agreed to pay $9.1 million to the U.S. government to resolve the allegations, without admitting liability.
Since 2018, more than 140,000 former users of the earplugs (primarily U.S. military veterans) have filed suit against 3M claiming they suffer from hearing loss, tinnitus, and other damage as a consequence of the defective design.
Internal emails showed that 3M officials boasted about charging $7.63 per piece for the earplugs which cost 85 cents to produce. The company's official response indicated that the cost to the government includes R&D costs.
3M settled close to 260,000 lawsuits in August 2023 by agreeing to pay $6 billion to current and former U.S. military members who were affected.
N95 respirators and the COVID-19 pandemic
The N95 respirator mask was developed by 3M and approved in 1972. Due to its ability to filter viral particulates, its use was recommended during the COVID-19 pandemic but supply soon became short. Much of the company's supply had already been sold prior to the outbreak.
The shortages led to the U.S. government asking 3M to stop exporting US-made N95 respirator masks to Canada and to Latin American countries, and President Donald Trump invoked the Defense Production Act to require 3M to prioritize orders from the federal government. The dispute was resolved when 3M agreed to import more respirators, mostly from its factories in China.
3M later struck a CA$70M deal with the federal government of Canada and the Ontario provincial government to produce N95 masks at their plant in Brockville, Ontario.
Operating facilities
3M's general offices, corporate research laboratories, and some division laboratories are located in St. Paul, Minnesota. As of 2017, 3M operated 80 manufacturing facilities in 29 U.S. states and 125 manufacturing and converting facilities in 37 countries outside the U.S.
During March 2016, 3M completed a research-and-development building on its Maplewood campus that cost $150 million. Seven hundred scientists from various divisions occupy the building. They were previously scattered across the campus. 3M hopes concentrating its research and development in this manner will improve collaboration. 3M received $9.6 million in local tax increment financing and relief from state sales taxes in order to assist with development of the building.
Selected factory detail information:
Cynthiana, Kentucky, U.S. factory producing Post-it Notes (672 SKU) and Scotch Tape (147 SKU). It has 539 employees and was established in 1969.
Newton Aycliffe, County Durham, UK factory producing respirators for workers safety using laser technology. It has 370 employees.
In Minnesota, 3M's Hutchinson facility produces products for more than half of the company's 23 divisions, as of 2019. The "super hub" has manufactured adhesive bandages for Nexcare, furnace filters, and Scotch Tape, among other products. The Cottage Grove plant is one of three operated by 3M for the production of pad conditioners, as of 2011.
3M has operated a manufacturing plant in Columbia, Missouri, since 1970. The plant has been used for the production of products including electronic components, solar and touchscreen films, and stethoscopes. The facility received a $20 million expansion in 2012 and has approximately 400 employees.
3M opened the Brookings, South Dakota plant in 1971, and announced a $70 million expansion in 2014. The facility manufactures more than 1,700 health care products and employs 1,100 people, as of 2018, making the plant 3M's largest focused on health care. Mask production at the site increased during the 2009 swine flu pandemic, 2002–2004 SARS outbreak, 2018 California wildfires, 2019–20 Australian bushfire season, and COVID-19 pandemic.
3M's Springfield, Missouri plant opened in 1967 and makes industrial adhesives and tapes for aerospace manufacturers. In 2017, 3M had approximately 330 employees in the metropolitan area, and announced a $40 million expansion project to upgrade the facility and redevelop another building.
In Iowa, the Ames plant makes sandpaper products and received funding from the Iowa Economic Development Authority (IEDA) for expansions in 2013 and 2018. The Knoxville plant is among 3M's largest and produces approximately 12,000 different products, including adhesives and tapes.
3M's Southeast Asian operations are based in Singapore, where the company has invested $1 billion over 50 years. 3M has a facility in Tuas, a manufacturing plant and Smart Urban Solutions lab in Woodlands, and a customer technical center in Yishun. 3M expanded a factory in Woodlands in 2011, announced a major expansion of the Tuas plant in 2016, and opened new headquarters in Singapore featuring a Customer Technical Centre in 2018.
The company has operated in China since 1984, and was Shanghai's first Wholly Foreign-Owned Enterprise. 3M's seventh plant, and the first dedicated to health care product production, opened in Shanghai in 2007. By October 2007, the company had opened an eighth manufacturing plant and technology center in Guangzhou. 3M broke ground on its ninth manufacturing facility, for the production of photovoltaics and other renewable energy products, in Hefei in 2011. 3M announced plans to construct a technology innovation center in Chengdu in 2015, and opened a fifth design center in Shanghai in 2019.
Leadership
Board chairs have included: William L. McKnight (1949–1966), Bert S. Cross (1966–1970), Harry Heltzer (1970–1975), Raymond H. Herzog (1975–1980), Lewis W. Lehr (1980–1986), Allen F. Jacobson (1986–1991), Livio DeSimone (1991–2001), James McNerney (2001–2005), George W. Buckley (2005–2012), and Inge Thulin (2012–2018). Thulin continued as executive chairman until Michael F. Roman was appointed in 2019.
3M's CEOs have included: Cross (1966–1970), Heltzer (1970–1975), Herzog (1975–1979), Lehr (1979–1986), Jacobson (1986–1991), DeSimone (1991–2001), McNerney (2001–2005), Robert S. Morrison (2005, interim), Buckley (2005–2012), Thulin (2012–2018), and Roman (2018–present).
3M's presidents have included: Edgar B. Ober (1905–1929), McKnight (1929–1949), Richard P. Carlton (1949–1953), Herbert P. Buetow (1953–1963), Cross (1963–1966), Heltzer (1966–1970), and Herzog (1970–1975). In the late 1970s, the position was separated into roles for U.S. and international operations. The position overseeing domestic operations was first held by Lehr, followed by John Pitblado from 1979 to 1981, then Jacobson from 1984 to 1991. James A. Thwaits led international operations starting in 1979. Buckley and Thulin were president during 2005–2012, and 2012–2018, respectively.
See also
Oakdale Dump
Further reading
V. Huck, Brand of the Tartan: The 3M Story, Appleton-Century-Crofts, 1955. Early history of 3M and challenges, includes employee profiles.
C. Rimington, From Minnesota mining and manufacturing to 3M Australia Pty Ltd (3M Australia: the Story of an Innovative Company), Sid Harta Publishers, 2013. Recollections from 3M Australia employees in context of broader organisational history.
Sharon Lerner, "How 3M Discovered, Then Concealed, the Dangers of Forever Chemicals", The New Yorker.
References
External links
Historical records of the 3M Company are available for research use at the Minnesota Historical Society
1902 establishments in Minnesota
1940s initial public offerings
American companies established in 1902
Companies listed on the New York Stock Exchange
Conglomerate companies established in 1902
Conglomerate companies of the United States
Companies in the Dow Jones Industrial Average
Companies in the Dow Jones Global Titans 50
Companies in the S&P 500 Dividend Aristocrats
Manufacturing companies based in Minnesota
Manufacturing companies established in 1902
Multinational companies headquartered in the United States
National Medal of Technology recipients
Nanotechnology companies
Office supply companies of the United States
Ramsey County, Minnesota
Renewable energy technology companies | 3M | [
"Materials_science"
] | 4,768 | [
"Nanotechnology",
"Nanotechnology companies"
] |
7,664,887 | https://en.wikipedia.org/wiki/Fosmid | Fosmids are similar to cosmids but are based on the bacterial F-plasmid. The cloning vector is limited, as a host (usually E. coli) can only contain one fosmid molecule. Fosmids can hold DNA inserts of up to 40 kb in size; often the source of the insert is random genomic DNA. A fosmid library is prepared by extracting the genomic DNA from the target organism and cloning it into the fosmid vector. The ligation mix is then packaged into phage particles and the DNA is transfected into the bacterial host. Bacterial clones propagate the fosmid library.
The low copy number offers higher stability than vectors with relatively higher copy numbers, including cosmids. Fosmids may be useful for constructing stable libraries from complex genomes. Fosmids have high structural stability and have been found to maintain human DNA effectively even after 100 generations of bacterial growth. Fosmid clones were used to help assess the accuracy of the Public Human Genome Sequence.
Discovery
The fertility plasmid or F-plasmid was discovered by Esther Lederberg and encodes information for the biosynthesis of sex pilus to aid in bacterial conjugation. Conjugation involves using the sex pilus to form a bridge between two bacteria cells; this bridge allows the F+ cell to transfer a single-stranded copy of the plasmid so that both cells contain a copy of the plasmid. On the way into the recipient cell, the corresponding DNA strand is synthesized by the recipient. The donor cell maintains a functional copy of the plasmid. It later was discovered that the F factor was the first episome and can exist as an independent plasmid making it a very stable vector for cloning. Conjugation aids in the formation of bacterial clone libraries by ensuring all cells contain the desired fosmid.
Fosmids are DNA vectors that use the F-plasmid origin of replication and partitioning mechanisms to allow cloning of large DNA fragments. A library that provides 20–70-fold redundant coverage of the genome can easily be prepared.
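As a rough illustration of how such coverage figures translate into library size, the Clarke–Carbon relation N = ln(1 − P) / ln(1 − L/G) gives the number of clones N needed so that any particular locus is represented with probability P, where L is the insert size and G the genome size. The sketch below assumes a 40 kb fosmid insert and a genome of roughly human scale; the formula and the genome figure are standard background assumptions, not values taken from this article.

```python
# Sketch of fosmid-library sizing with the Clarke-Carbon relation.
# insert and genome sizes are assumed, illustrative values.
import math

insert_kb = 40            # typical fosmid insert size (kb)
genome_kb = 3_200_000     # assumed genome size (kb), roughly human scale

def clones_needed(p_cover, insert_kb, genome_kb):
    """Number of random clones so any given locus is covered with probability p_cover."""
    return math.ceil(math.log(1 - p_cover) / math.log(1 - insert_kb / genome_kb))

for p in (0.95, 0.99, 0.9999):
    n = clones_needed(p, insert_kb, genome_kb)
    fold = n * insert_kb / genome_kb
    print(f"P(cover any locus) = {p}: {n:,} clones (~{fold:.1f}-fold coverage)")
```

Under these assumptions, reaching very high per-locus coverage probabilities corresponds to the multi-fold redundant coverage mentioned above.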
DNA libraries
The first step in sequencing entire genomes is cloning the genome into manageable units of some 50–200 kilobases in length. A fosmid library is well suited to this task because of its stability and its limitation of one plasmid per cell. By limiting the number of plasmids in each cell, the potential for recombination is decreased, thus preserving the genome insert.
Fosmids contain several functional elements:
OriT (Origin of Transfer): The sequence which marks the starting point of conjugative transfer.
OriV (Origin of Replication): The sequence starting with which the plasmid-DNA will be replicated in the recipient cell.
tra-region (transfer genes): Genes coding the F-Pilus and DNA transfer process.
IS (Insertion Elements): so-called "selfish genes" (sequence fragments which can integrate copies of themselves at different locations).
The methods of cutting and inserting DNA into fosmid vectors are now well established. Many companies can create a fosmid library from any sample of DNA in a short period of time at relatively low cost. This has been vital in allowing researchers to sequence numerous genomes for study. Through a variety of methods, the genomes of more than 6,651 organisms have been fully sequenced, with 58,695 sequencing projects ongoing.
Uses
Sometimes it is difficult to accurately distinguish individual chromosomes based on chromosome length, arm ratio, and C-banding pattern. Fosmids can be used as reliable cytological markers for individual chromosome identification and fluorescent in situ hybridization based metaphase chromosome karyotypes can be used to show whether the positions of these fosmids were successfully constructed.
The fosmid system is excellent for rapidly creating chromosome-specific mini-BAC libraries from flow-sorted chromosomal DNA. The major advantage of fosmids over cosmid systems lies in their capability of stably propagating human DNA fragments. Highly repetitive in nature, human DNA is well known for its extreme instability in multicopy vector systems. Stability has been found to increase dramatically when the human DNA inserts are present in single copies in recombination-deficient E. coli cells. Therefore, fosmids serve as reliable substrates for large-scale genomic DNA sequencing.
References
External links
NCBI Nucleotide Database
Cloning
Genomics techniques
Laboratory techniques
Molecular biology techniques | Fosmid | [
"Chemistry",
"Engineering",
"Biology"
] | 966 | [
"Genetics techniques",
"Genomics techniques",
"Cloning",
"Genetic engineering",
"Molecular biology techniques",
"nan",
"Molecular biology"
] |
670,376 | https://en.wikipedia.org/wiki/Power-flow%20study | In power engineering, the power-flow study, or load-flow study, is a numerical analysis of the flow of electric power in an interconnected system. A power-flow study usually uses simplified notations such as a one-line diagram and per-unit system, and focuses on various aspects of AC power parameters, such as voltage magnitudes, voltage angles, real power, and reactive power. It analyzes the power system in normal steady-state operation.
Power-flow or load-flow studies are important for planning future expansion of power systems as well as in determining the best operation of existing systems. The principal information obtained from the power-flow study is the magnitude and phase angle of the voltage at each bus, and the real and reactive power flowing in each line.
Commercial power systems are usually too complex to allow for hand solution of the power flow. Special-purpose network analyzers were built between 1929 and the early 1960s to provide laboratory-scale physical models of power systems. Large-scale digital computers replaced the analog methods with numerical solutions.
In addition to a power-flow study, computer programs perform related calculations such as short-circuit fault analysis, stability studies (transient and steady-state), unit commitment and economic dispatch. In particular, some programs use linear programming to find the optimal power flow, the conditions which give the lowest cost per kilowatt hour delivered.
A load flow study is especially valuable for a system with multiple load centers, such as a refinery complex. The power-flow study is an analysis of the system’s capability to adequately supply the connected load. The total system losses, as well as individual line losses, also are tabulated. Transformer tap positions are selected to ensure the correct voltage at critical locations such as motor control centers. Performing a load-flow study on an existing system provides insight and recommendations as to the system operation and optimization of control settings to obtain maximum capacity while minimizing the operating costs. The results of such an analysis are in terms of active power, reactive power, voltage magnitude and phase angle. Furthermore, power-flow computations are crucial for optimal operations of groups of generating units.
In terms of its approach to uncertainty, load-flow studies can be divided into deterministic load flow and uncertainty-concerned load flow. A deterministic load-flow study does not take into account the uncertainties arising from either power generation or load behavior. To take these uncertainties into consideration, several approaches have been used, such as probabilistic methods, possibilistic methods, information gap decision theory, robust optimization, and interval analysis.
Model
An alternating current power-flow model is a model used in electrical engineering to analyze power grids. It provides a nonlinear system of equations which describes the energy flow through each transmission line. The problem is non-linear because the power flow into load impedances is a function of the square of the applied voltages. Due to nonlinearity, in many cases the analysis of large network via AC power-flow model is not feasible, and a linear (but less accurate) DC power-flow model is used instead.
Usually analysis of a three-phase power system is simplified by assuming balanced loading of all three phases. Sinusoidal steady-state operation is assumed, with no transient changes in power flow or voltage due to load or generation changes, meaning all current and voltage waveforms are sinusoidal with no DC offset and have the same constant frequency. The previous assumption is the same as assuming the power system is linear time-invariant (even though the system of equations is nonlinear), driven by sinusoidal sources of same frequency, and operating in steady-state, which allows to use phasor analysis, another simplification. A further simplification is to use the per-unit system to represent all voltages, power flows, and impedances, scaling the actual target system values to some convenient base. A system one-line diagram is the basis to build a mathematical model of the generators, loads, buses, and transmission lines of the system, and their electrical impedances and ratings.
Power-flow problem formulation
The goal of a power-flow study is to obtain complete voltage angles and magnitude information for each bus in a power system for specified load and generator real power and voltage conditions. Once this information is known, real and reactive power flow on each branch as well as generator reactive power output can be analytically determined. Due to the nonlinear nature of this problem, numerical methods are employed to obtain a solution that is within an acceptable tolerance.
The solution to the power-flow problem begins with identifying the known and unknown variables in the system. The known and unknown variables are dependent on the type of bus. A bus without any generators connected to it is called a Load Bus. With one exception, a bus with at least one generator connected to it is called a Generator Bus. The exception is one arbitrarily-selected bus that has a generator. This bus is referred to as the slack bus.
In the power-flow problem, it is assumed that the real power and reactive power at each Load Bus are known. For this reason, Load Buses are also known as PQ Buses. For Generator Buses, it is assumed that the real power generated and the voltage magnitude is known. For the Slack Bus, it is assumed that the voltage magnitude and voltage phase are known. Therefore, for each Load Bus, both the voltage magnitude and angle are unknown and must be solved for; for each Generator Bus, the voltage angle must be solved for; there are no variables that must be solved for the Slack Bus. In a system with buses and generators, there are then unknowns.
In order to solve for the unknowns, there must be as many equations as unknowns, and these equations must not introduce any new unknown variables. The possible equations to use are power balance equations, which can be written for real and reactive power for each bus.
The real power balance equation is:

$$0 = -P_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}\right)$$

where P_i is the net active power injected at bus i, G_ik is the real part of the element in the bus admittance matrix YBUS corresponding to the ith row and kth column, B_ik is the imaginary part of that element, and θ_ik is the difference in voltage angle between the ith and kth buses (θ_ik = δ_i − δ_k). The reactive power balance equation is:

$$0 = -Q_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}\right)$$

where Q_i is the net reactive power injected at bus i.
Equations included are the real and reactive power balance equations for each Load Bus and the real power balance equation for each Generator Bus. Only the real power balance equation is written for a Generator Bus because the net reactive power injected is assumed to be unknown and therefore including the reactive power balance equation would result in an additional unknown variable. For similar reasons, there are no equations written for the Slack Bus.
In many transmission systems, the impedance of the power network lines is primarily inductive, i.e. the phase angles of the power lines impedance are usually relatively large and very close to 90 degrees. There is thus a strong coupling between real power and voltage angle, and between reactive power and voltage magnitude, while the coupling between real power and voltage magnitude, as well as reactive power and voltage angle, is weak. As a result, real power is usually transmitted from the bus with higher voltage angle to the bus with lower voltage angle, and reactive power is usually transmitted from the bus with higher voltage magnitude to the bus with lower voltage magnitude. However, this approximation does not hold when the phase angle of the power line impedance is relatively small.
Newton–Raphson solution method
There are several different methods of solving the resulting nonlinear system of equations. The most popular is a variation of the Newton–Raphson method. The Newton–Raphson method is an iterative method which begins with initial guesses of all unknown variables (voltage magnitude and angles at Load Buses and voltage angles at Generator Buses). Next, a Taylor series is written, with the higher-order terms ignored, for each of the power balance equations included in the system of equations. The result is a linear system of equations that can be expressed as:

$$\begin{bmatrix}\Delta\theta \\ \Delta|V|\end{bmatrix} = -J^{-1}\begin{bmatrix}\Delta P \\ \Delta Q\end{bmatrix}$$

where ΔP and ΔQ are called the mismatch equations:

$$\Delta P_i = -P_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}\right)$$

$$\Delta Q_i = -Q_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}\right)$$

and J is a matrix of partial derivatives known as a Jacobian:

$$J = \begin{bmatrix}\dfrac{\partial \Delta P}{\partial \theta} & \dfrac{\partial \Delta P}{\partial |V|} \\[1ex] \dfrac{\partial \Delta Q}{\partial \theta} & \dfrac{\partial \Delta Q}{\partial |V|}\end{bmatrix}$$

The linearized system of equations is solved to determine the next guess (m + 1) of voltage magnitude and angles based on:

$$\theta^{m+1} = \theta^{m} + \Delta\theta, \qquad |V|^{m+1} = |V|^{m} + \Delta|V|$$
The process continues until a stopping condition is met. A common stopping condition is to terminate if the norm of the mismatch equations is below a specified tolerance.
A rough outline of the solution of the power-flow problem is as follows (a minimal numerical sketch is given after the list):
Make an initial guess of all unknown voltage magnitudes and angles. It is common to use a "flat start" in which all voltage angles are set to zero and all voltage magnitudes are set to 1.0 p.u.
Solve the power balance equations using the most recent voltage angle and magnitude values.
Linearize the system around the most recent voltage angle and magnitude values
Solve for the change in voltage angle and magnitude
Update the voltage magnitude and angles
Check the stopping conditions, if met then terminate, else go to step 2.
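The sketch below follows this outline for a hypothetical three-bus network (one slack bus, one generator bus, one load bus). All admittances and power set-points are illustrative assumptions, and the Jacobian is approximated by finite differences for brevity; a textbook implementation would use the analytic partial derivatives shown above.

```python
# Minimal Newton-Raphson power-flow sketch for an illustrative 3-bus network.
# Bus 0: slack, bus 1: generator (PV) bus, bus 2: load (PQ) bus.
import numpy as np

# Line series admittances (per unit, assumed values).
y01 = 1 / complex(0.02, 0.06)
y12 = 1 / complex(0.02, 0.06)
y02 = 1 / complex(0.01, 0.03)

# Bus admittance matrix Ybus and its real/imaginary parts.
Y = np.array([
    [y01 + y02, -y01,       -y02      ],
    [-y01,       y01 + y12, -y12      ],
    [-y02,      -y12,        y02 + y12],
])
G, B = Y.real, Y.imag

# Scheduled net injections (generation minus load), per unit, assumed.
P_sched = np.array([0.0, 0.5, -0.8])
Q_sched = np.array([0.0, 0.0, -0.3])

V = np.array([1.0, 1.02, 1.0])   # magnitudes; slack and PV values are fixed
th = np.zeros(3)                 # angles, "flat start"

def injections(V, th):
    """Net P and Q injected at every bus, from the power balance equations."""
    P, Q = np.zeros(3), np.zeros(3)
    for i in range(3):
        for k in range(3):
            d = th[i] - th[k]
            P[i] += V[i] * V[k] * (G[i, k] * np.cos(d) + B[i, k] * np.sin(d))
            Q[i] += V[i] * V[k] * (G[i, k] * np.sin(d) - B[i, k] * np.cos(d))
    return P, Q

# Unknowns: th[1], th[2] and V[2]; enforced equations: P at buses 1,2 and Q at bus 2.
for _ in range(20):
    P, Q = injections(V, th)
    mis = np.array([P_sched[1] - P[1], P_sched[2] - P[2], Q_sched[2] - Q[2]])
    if np.max(np.abs(mis)) < 1e-8:
        break
    # Finite-difference Jacobian of the mismatches w.r.t. the unknowns.
    J, eps = np.zeros((3, 3)), 1e-6
    for j, (kind, idx) in enumerate([("th", 1), ("th", 2), ("V", 2)]):
        Vp, thp = V.copy(), th.copy()
        if kind == "th":
            thp[idx] += eps
        else:
            Vp[idx] += eps
        Pp, Qp = injections(Vp, thp)
        misp = np.array([P_sched[1] - Pp[1], P_sched[2] - Pp[2], Q_sched[2] - Qp[2]])
        J[:, j] = (misp - mis) / eps
    dx = np.linalg.solve(J, -mis)      # Newton step
    th[1] += dx[0]; th[2] += dx[1]; V[2] += dx[2]

print("angles (rad):", th, " magnitudes (pu):", V)
```

With small, well-conditioned data like this, the loop typically converges in a few iterations from the flat start.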
Other power-flow methods
Gauss–Seidel method: This is the earliest devised method. It shows slower rates of convergence compared to other iterative methods, but it uses very little memory and does not need to solve a matrix system.
Fast-decoupled load-flow method: a variation on Newton–Raphson that exploits the approximate decoupling of active and reactive flows in well-behaved power networks, and additionally fixes the value of the Jacobian during the iteration in order to avoid costly matrix decompositions. It is also referred to as "fixed-slope, decoupled NR". Within the algorithm, the Jacobian matrix is inverted only once, and three assumptions are made: the conductance between buses is zero, the magnitude of each bus voltage is one per unit, and the sine of the angle difference between buses is zero. Fast-decoupled load flow can return the answer within seconds, whereas the Newton–Raphson method takes much longer. This is useful for real-time management of power grids.
Holomorphic embedding load flow method: A recently developed method based on advanced techniques of complex analysis. It is direct and guarantees the calculation of the correct (operative) branch, out of the multiple solutions present in the power-flow equations.
Backward-Forward Sweep (BFS) method: A method developed to take advantage of the radial structure of most modern distribution grids. It involves choosing an initial voltage profile and separating the original system of equations of grid components into two separate systems and solving one, using the last results of the other, until convergence is achieved. Solving for the currents with the voltages given is called the backward sweep (BS) and solving for the voltages with the currents given is called the forward sweep (FS).
Laurent Power Flow (LPF) method: Power flow formulation that provides guarantee of uniqueness of solution and independence on initial conditions for electrical distribution systems. The LPF is based on the current injection method (CIM) and applies the Laurent series expansion. The main characteristics of this formulation are its proven numerical convergence and stability, and its computational advantages, showing to be at least ten times faster than the BFS method both in balanced and unbalanced networks. Since it is based on the system's admittance matrix, the formulation is able to consider radial and meshed network topologies without additional modifications (contrary to the compensation-based BFS). The simplicity and computational efficiency of the LPF method make it an attractive option for recursive power flow problems, such as those encountered in time-series analyses, metaheuristics, probabilistic analysis, reinforcement learning applied to power systems, and other related applications.
DC power-flow
Direct current load flow gives estimations of lines power flows on AC power systems. Direct current load flow looks only at active power flows and neglects reactive power flows. This method is non-iterative and absolutely convergent but less accurate than AC Load Flow solutions. Direct current load flow is used wherever repetitive and fast load flow estimations are required.
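Because the DC model is linear, the whole computation reduces to assembling a susceptance matrix and performing a single linear solve. The sketch below uses illustrative three-bus line data and injections (not taken from the article).

```python
# Minimal DC power-flow sketch: flat voltage magnitudes, small angles, lossless lines.
import numpy as np

# Branches as (from_bus, to_bus, reactance_pu); illustrative values.
branches = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]
n = 3
P = np.array([0.0, 0.5, -0.8])   # net active injections; bus 0 is the slack bus

# Build the nodal susceptance matrix B' used by the DC model.
B = np.zeros((n, n))
for f, t, x in branches:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Solve B'[1:,1:] * theta[1:] = P[1:] with the slack angle fixed at zero.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Branch active power flows: P_ft = (theta_f - theta_t) / x_ft
for f, t, x in branches:
    print(f"flow {f}->{t}: {(theta[f] - theta[t]) / x:+.3f} pu")
```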
References
Electric power distribution
Power engineering | Power-flow study | [
"Engineering"
] | 2,473 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
670,510 | https://en.wikipedia.org/wiki/Pump-jet | A pump-jet, hydrojet, or water jet is a marine system that produces a jet of water for propulsion. The mechanical arrangement may be a ducted propeller (axial-flow pump), a centrifugal pump, or a mixed flow pump which is a combination of both centrifugal and axial designs. The design also incorporates an intake to provide water to the pump and a nozzle to direct the flow of water out of the pump.
Design
A pump-jet works by having an intake (usually at the bottom of the hull) that allows water to pass underneath the vessel into the engines. Water enters the pump through this inlet. The pump can be of a centrifugal design for high speeds, or an axial-flow design for low to medium speeds. The water pressure inside the inlet is increased by the pump and forced backwards through a nozzle. With the use of a reversing bucket, reverse thrust can also be achieved for moving astern, quickly and without the need to change gear or adjust engine thrust. The reversing bucket can also be used to help slow the ship down when braking. This feature is the main reason pump-jets are so maneuverable.
The nozzle also provides the steering of the pump-jet. Plates, similar to rudders, can be attached to the nozzle in order to redirect the water flow to port and starboard. In a way, this is similar to the principle of air thrust vectoring, a technique long used in launch vehicles (rockets and missiles) and later in military jet-powered aircraft. This provides pump-jet-powered ships with superior agility at sea. Another advantage is that when moving astern using the reversing bucket, the steering is not inverted, unlike on propeller-powered ships.
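A first-order feel for the thrust produced can be obtained from simple momentum theory: the pump accelerates a mass flow of water from roughly the vessel speed up to the nozzle exit velocity, and the thrust equals the resulting rate of change of momentum. The sketch below is an idealized estimate with assumed numbers; it does not describe any particular pump-jet and ignores inlet, duct and pump losses.

```python
# Idealized momentum-theory estimate of pump-jet thrust (all numbers assumed).
rho = 1025.0          # sea-water density, kg/m^3
nozzle_area = 0.05    # nozzle exit area, m^2 (assumed)
v_ship = 15.0         # vessel speed, m/s (~29 knots, assumed)
v_jet = 30.0          # jet exit velocity relative to the vessel, m/s (assumed)

mass_flow = rho * nozzle_area * v_jet                     # kg/s through the nozzle
thrust = mass_flow * (v_jet - v_ship)                     # N, momentum change rate
ideal_power = 0.5 * mass_flow * (v_jet**2 - v_ship**2)    # W added to the flow
froude_efficiency = 2 * v_ship / (v_ship + v_jet)         # ideal propulsive efficiency

print(f"mass flow         ~ {mass_flow:,.0f} kg/s")
print(f"thrust            ~ {thrust / 1000:,.1f} kN")
print(f"ideal power       ~ {ideal_power / 1e6:,.2f} MW")
print(f"Froude efficiency ~ {froude_efficiency:.2f}")
```

The Froude efficiency term shows the familiar trade-off: a smaller jet velocity excess over the ship speed is more efficient but requires a larger mass flow for the same thrust.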
Axial flow
An axial-flow waterjet's pressure is increased by diffusing the flow as it passes through the impeller blades and stator vanes. The pump nozzle then converts this pressure energy into velocity, thus producing thrust.
Axial-flow waterjets produce high volumes at lower velocity, making them well suited to larger low to medium speed craft, the exception being personal water craft, where the high water volumes create tremendous thrust and acceleration as well as high top speeds. But these craft also have high power-to-weight ratios compared to most marine craft. Axial-flow waterjets are by far the most common type of pump.
Mixed flow
Mixed-flow waterjet designs incorporate aspects of both axial flow and centrifugal flow pumps. Pressure is developed by both diffusion and radial outflow. Mixed flow designs produce lower volumes of water at high velocity making them suited for small to moderate craft sizes and higher speeds. Common uses include high speed pleasure craft and waterjets for shallow water river racing (see River Marathon).
Centrifugal flow
Centrifugal-flow waterjet designs make use of radial flow to create water pressure.
Examples of centrifugal designs are the Schottel Pump-Jet and outboard sterndrives.
Advantages
Pump jets have some advantages over bare propellers for certain applications, usually related to requirements for high-speed or shallow-draft operations. These include:
Higher speed before the onset of cavitation, because of the raised internal dynamic pressure
High power density (with respect to volume) of both the propulsor and the prime mover (because a smaller, higher-speed unit can be used)
Protection of the rotating element, making operation safer around swimmers and aquatic life
Improved shallow-water operations, because only the inlet needs to be submerged
Increased maneuverability, by adding a steerable nozzle to create vectored thrust
Noise reduction, resulting in a low sonar signature; this particular system has little in common with other pump-jet propulsors and is also known as "shrouded propeller configuration"; applications:
Warships designed for low observability, for example the Swedish .
Submarines, for example the Royal Navy and , the US Navy and , the French Navy and Barracuda class, and the Russian Navy .
Modern torpedoes, such as the Spearfish, the Mk 48 and Mk 50 weapons.
History
The water jet principle in shipping industry can be traced back to 1661 when Toogood and Hayes produced a description of a ship having a central water channel in which either a plunger or centrifugal pump was installed to provide the motive power.
On December 3, 1787, inventor James Rumsey demonstrated a water-jet propelled boat using a steam-powered pump to drive a stream of water from the stern. This occurred on the Potomac River at Shepherdstown, Virginia (now West Virginia) before a crowd of witnesses including General Horatio Gates. The 50-foot long boat traveled about one-half mile upriver before returning to the dock. The boat was reported to reach a speed of four mph moving upstream.
On December 21, 1833, Irish engineer John Howard Kyan received a UK patent for propelling ships by a jet of water ejected from the stern.
In April 1932, Italian engineer Secondo Campini demonstrated a pump-jet propelled boat in Venice, Italy. The boat achieved a top speed comparable to that of a boat with a conventional engine of similar output. The Italian Navy, which had funded the development of the boat, placed no orders but did veto the sale of the design outside of Italy. The first modern jetboat was developed by New Zealand engineer Sir William Hamilton in the mid-1950s.
Uses
Pump-jets were once limited to high-speed pleasure craft (such as jet skis and jetboats) and other small vessels, but since 2000 the desire for high-speed vessels has increased and thus the pump-jet is gaining popularity on larger craft, military vessels and ferries. On these larger craft, they can be powered by diesel engines or gas turbines. High speeds can be achieved with this configuration, even with a displacement hull.
Pump-jet powered ships are very maneuverable. Examples of ships using pump-jets include the Stena high-speed sea service ferries, the Royal Navy Swiftsure, Trafalgar and Astute-class submarines, the United States Seawolf and Virginia classes, and the Russian Borei-class submarines. They are also used by the United States littoral combat ships.
See also
Internal drive propulsion
Personal water craft
Wetbike
Kitchen rudder
Water rocket
Chain boat - Water turbines
Notes
References
Charles Dawson, "The Early History of the Water-jet Engine", "Industrial Heritage", Vol. 30, No 3, 2004, page 36.
David S. Yetman, "Without A Prop", DogEar Publishers, 2010
Marine propulsion
Jet engines | Pump-jet | [
"Technology",
"Engineering"
] | 1,367 | [
"Jet engines",
"Marine propulsion",
"Engines",
"Marine engineering"
] |
670,602 | https://en.wikipedia.org/wiki/Perfect%20graph%20theorem | In graph theory, the perfect graph theorem of Lovász states that an undirected graph is perfect if and only if its complement graph is also perfect. This result had been conjectured by Berge, and it is sometimes called the weak perfect graph theorem to distinguish it from the strong perfect graph theorem characterizing perfect graphs by their forbidden induced subgraphs.
Statement
A perfect graph is an undirected graph with the property that, in every one of its induced subgraphs, the size of the largest clique equals the minimum number of colors in a coloring of the subgraph. Perfect graphs include many important graph classes, including bipartite graphs, chordal graphs, and comparability graphs.
The complement of a graph has an edge between two vertices if and only if the original graph does not have an edge between the same two vertices. Thus, a clique in the original graph becomes an independent set in the complement and a coloring of the original graph becomes a clique cover of the complement.
The perfect graph theorem states:
The complement of a perfect graph is perfect.
Equivalently, in a perfect graph, the size of the maximum independent set equals the minimum number of cliques in a clique cover.
Example
Let G be a cycle graph of odd length greater than three (a so-called "odd hole"). Then G requires at least three colors in any coloring, but has no triangle, so it is not perfect. By the perfect graph theorem, the complement of G (an "odd antihole") must therefore also not be perfect. If G is a cycle of five vertices, it is isomorphic to its complement, but this property is not true for longer odd cycles, and it is not as trivial to compute the clique number and chromatic number in an odd antihole as it is in an odd hole. As the strong perfect graph theorem states, the odd holes and odd antiholes turn out to be the minimal forbidden induced subgraphs for the perfect graphs.
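This example can be verified directly by brute force for a small odd hole such as C7 and its complement. The sketch below computes the clique number and the chromatic number by exhaustive search (feasible only for tiny graphs like these) and shows that both graphs have chromatic number strictly larger than clique number, so neither is perfect.

```python
# Brute-force check that an odd hole (C7) and its complement are both imperfect.
from itertools import combinations, product

def cycle(n):
    """Edge set of the cycle graph C_n, edges stored as frozensets."""
    return set(frozenset((i, (i + 1) % n)) for i in range(n))

def complement(n, edges):
    """Edge set of the complement graph on n vertices."""
    return set(frozenset(e) for e in combinations(range(n), 2)) - edges

def clique_number(n, edges):
    for size in range(n, 0, -1):
        for nodes in combinations(range(n), size):
            if all(frozenset(p) in edges for p in combinations(nodes, 2)):
                return size
    return 0

def chromatic_number(n, edges):
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in map(tuple, edges)):
                return k

n = 7
hole = cycle(n)
antihole = complement(n, hole)
for name, e in [("odd hole C7", hole), ("odd antihole (complement of C7)", antihole)]:
    print(name, "- clique number:", clique_number(n, e),
          " chromatic number:", chromatic_number(n, e))
```

For C7 this prints clique number 2 against chromatic number 3, and for its complement clique number 3 against chromatic number 4, illustrating why both families appear among the forbidden induced subgraphs.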
Applications
In a nontrivial bipartite graph, the optimal number of colors is (by definition) two, and (since bipartite graphs are triangle-free) the maximum clique size is also two. Also, any induced subgraph of a bipartite graph remains bipartite. Therefore, bipartite graphs are perfect. In bipartite graphs, a minimum clique cover takes the form of a maximum matching together with an additional clique for every unmatched vertex, with size n − M, where M is the cardinality of the matching. Thus, in this case, the perfect graph theorem implies Kőnig's theorem that the size of a maximum independent set in a bipartite graph is also n − M, a result that was a major inspiration for Berge's formulation of the theory of perfect graphs.
Mirsky's theorem characterizing the height of a partially ordered set in terms of partitions into antichains can be formulated as the perfection of the comparability graph of the partially ordered set, and Dilworth's theorem characterizing the width of a partially ordered set in terms of partitions into chains can be formulated as the perfection of the complements of these graphs. Thus, the perfect graph theorem can be used to prove Dilworth's theorem from the (much easier) proof of Mirsky's theorem, or vice versa.
Lovász's proof
To prove the perfect graph theorem, Lovász used an operation of replacing vertices in a graph by cliques; it was already known to Berge that, if a graph is perfect, the graph formed by this replacement process is also perfect. Any such replacement process may be broken down into repeated steps of doubling a vertex. If the doubled vertex belongs to a maximum clique of the graph, it increases both the clique number and the chromatic number by one. If, on the other hand, the doubled vertex does not belong to a maximum clique, form a graph H by removing the vertices with the same color as the doubled vertex (but not the doubled vertex itself) from an optimal coloring of the given graph. The removed vertices meet every maximum clique, so H has clique number and chromatic number one less than that of the given graph. The removed vertices and the new copy of the doubled vertex can then be added back as a single color class, showing that in this case the doubling step leaves the chromatic number unchanged. The same argument shows that doubling preserves the equality of the clique number and the chromatic number in every induced subgraph of the given graph, so each doubling step preserves the perfection of the graph.
Given a perfect graph G, Lovász forms a graph G* by replacing each vertex v by a clique of tv vertices, where tv is the number of distinct maximum independent sets in G that contain v. It is possible to correspond each of the distinct maximum independent sets in G with one of the maximum independent sets in G*, in such a way that the chosen maximum independent sets in G* are all disjoint and each vertex of G* appears in a single chosen set; that is, G* has a coloring in which each color class is a maximum independent set. Necessarily, this coloring is an optimal coloring of G*. Because G is perfect, so is G*, and therefore it has a maximum clique K* whose size equals the number of colors in this coloring, which is the number of distinct maximum independent sets in G; necessarily, K* contains a distinct representative for each of these maximum independent sets. The corresponding set K of vertices in G (the vertices whose expanded cliques in G* intersect K*) is a clique in G with the property that it intersects every maximum independent set in G. Therefore, the graph formed from G by removing K has clique cover number at most one less than the clique number of G, and independence number at least one less than the independence number of G, and the result follows by induction on this number.
Relation to the strong perfect graph theorem
The strong perfect graph theorem states that a graph is perfect if and only if none of its induced subgraphs are cycles of odd length greater than or equal to five, or their complements. Because this characterization is unaffected by graph complementation, it immediately implies the weak perfect graph theorem.
Generalizations
It has been proved that, if the edges of a complete graph are partitioned into three subgraphs in such a way that every three vertices induce a connected graph in one of the three subgraphs, and if two of the subgraphs are perfect, then the third subgraph is also perfect. The perfect graph theorem is the special case of this result when one of the three subgraphs is the empty graph.
Notes
References
Theorems in graph theory
Articles containing proofs
Perfect graphs | Perfect graph theorem | [
"Mathematics"
] | 1,390 | [
"Articles containing proofs",
"Theorems in graph theory",
"Theorems in discrete mathematics"
] |
671,039 | https://en.wikipedia.org/wiki/Linoleum | Linoleum is a floor covering made from materials such as solidified linseed oil (linoxyn), pine resin, ground cork dust, sawdust, and mineral fillers such as calcium carbonate, most commonly on a burlap or canvas backing. Pigments are often added to the materials to create the desired color finish. Commercially, the material has been largely replaced by sheet vinyl flooring, although in the UK and Australia this is often still referred to as "lino".
The finest linoleum floors, known as "inlaid", are extremely durable, and are made by joining and inlaying solid pieces of linoleum. Cheaper patterned linoleum comes in different grades or gauges, and is printed with thinner layers which are more prone to wear and tear. High-quality linoleum is flexible and thus can be used in buildings where a more rigid material (such as ceramic tile) would crack.
Technology
Linoleum in essence consists of two components: a polymerizable organic compound and a collection of fillers, pigments, and catalysts. The polymerizable precursors are rich in polyunsaturated fats, especially derivatives of linoleic acid and alpha-linolenic acid. Such fats are called drying oils because they "dry" (harden) upon exposure to the oxygen in air. The drying process results in cross-linking of the fat molecules. Because this crosslinking is often slow, catalysts and heat are applied to give a durable material. During the crosslinking, fillers and pigments are mixed with the resin.
History
Linoleum was invented by Englishman Frederick Walton. In 1855, Walton happened to notice the rubbery, flexible skin of solidified linseed oil (linoxyn) that had formed on a can of oil-based paint and thought that it might form a substitute for India rubber. Raw linseed oil oxidizes very slowly, but Walton accelerated the process by heating it with lead acetate and zinc sulfate. This made the oil form a resinous mass into which lengths of cheap cotton cloth were dipped until a thick coating formed. The coating was then scraped off and boiled with benzene or similar solvents to form a varnish. Walton initially planned to sell his varnish to the makers of water-repellent fabrics such as oilcloth, and received Patent No. 209 on 27 January 1860 for the process. However, his method had problems: the cotton cloth soon fell apart, and it took months to produce enough of the linoxyn. Little interest was shown in Walton's varnish. In addition, his first factory burned down, and he suffered from persistent and painful rashes.
Walton soon came up with an easier way to transfer the oil to the cotton sheets, by hanging them vertically and sprinkling the oil from above, and he tried mixing the linoxyn with sawdust and cork dust to make it less tacky. In 1863, he applied for a further patent, which read: "For these purposes canvas or other suitable strong fabrics are coated over on their upper surfaces with a composition of oxidized oil, cork dust, and gum or resin ... such surfaces being afterward printed, embossed, or otherwise ornamented. The back or under surfaces of such fabrics are coated with a coating of such oxidized oils, or oxidized oils and gum or resin, and by preference without an admixture of cork."
At first, Walton called his invention "Kampticon", which was deliberately close to Kamptulicon, the name of an existing floor covering, but he soon changed it to Linoleum, which he derived from the Latin words linum (flax) and oleum (oil). In 1864, he established the Linoleum Manufacturing Company Ltd., with a factory at Staines, near London. The new product did not prove immediately popular, mainly due to intense competition from the makers of Kamptulicon and oilcloth. The company operated at a loss for its first five years, until Walton began an intensive advertising campaign and opened two shops in London for the exclusive sale of Linoleum. Walton's friend Jerimiah Clarke designed the linoleum patterns, typically with a Grecian urn motif around the borders.
Other inventors began their own experiments after Walton took out his patent, and in 1871, William Parnacott took out a patent for a method of producing linoxyn by blowing hot air into a tank of linseed oil for several hours, then cooling the material in trays. Unlike Walton's process, which took weeks, Parnacott's method took only a day or two, although the quality of the linoxyn was not as good. Despite this, many manufacturers opted to use the less expensive Parnacott process.
Walton soon faced competition from other manufacturers, including a company which bought the rights to Parnacott's process, and launched its own floor covering, which it named Corticine, from the Latin cortex (bark or rind). Corticine was mainly made of cork dust and linoxyn without a cloth backing, and became popular because it was cheaper than linoleum.
By 1869, Walton's factory in Staines, England was exporting to Europe and the United States. In 1877, the Scottish town of Kirkcaldy, in Fife, became the largest producer of linoleum in the world, with no fewer than six floorcloth manufacturers in the town, most notably Michael Nairn & Co., which had been producing floor cloth since 1847.
Walton opened the American Linoleum Manufacturing Company in 1872 on Staten Island, in partnership with Joseph Wild, the company's town being named Linoleumville (renamed Travis in 1930). It was the first U.S. linoleum manufacturer, but was soon followed by the American Nairn Linoleum Company, established by Sir Michael Nairn in 1887 (later the Congoleum-Nairn Company, and then the Congoleum Corporation of America), in Kearny, New Jersey. Congoleum now manufactures sheet vinyl and no longer has a linoleum line.
Loss of trademark protection
Walton was unhappy with Michael Nairn & Co's use of the name Linoleum and brought a lawsuit against them for trademark infringement. However, the term had not been trademarked, and he lost the suit, the court opining that even if the name had been registered as a trademark, it was by now so widely used that it had become generic, only 14 years after its invention. It is considered to be the first product name to become a generic term.
Use
Between the time of its invention in 1860 and its being largely superseded by other hard floor coverings in the 1950s, linoleum was considered to be an excellent, inexpensive material for high-use areas. In the late nineteenth and early twentieth centuries, it was favoured in hallways and passages, and as a surround for carpet squares. However, most people associate linoleum with its common twentieth century use on kitchen floors. Its water resistance enabled easy maintenance of sanitary conditions and its resilience made standing easier and reduced breakage of dropped china.
Other products devised by Walton included Linoleum Muralis in 1877, which became better known as Lincrusta. Essentially a highly durable linoleum wall covering, Lincrusta could be manufactured to resemble carved plaster or wood, or even leather. It was very successful, and inspired a much cheaper imitation, Anaglypta, originally devised by one of Walton's showroom managers.
Walton also tried integrating designs into linoleum during the manufacturing stage, coming up with granite, marbled, and jaspé (striped) linoleum. For the granite variety, granules of various colors of linoleum cement were mixed together, before being hot-rolled. If the granules were not completely mixed before rolling, the result was marbled or jaspé patterns.
Walton's next product was inlaid linoleum, which resembled encaustic tiles, in 1882. Previously, linoleum had been produced in solid colors, with patterns printed on the surface if required. In inlaid linoleum, the colors extend all the way through to the backing cloth. Inlaid linoleum was made using a stencil type method where different-colored granules were placed in shaped metal trays, after which the sheets were run through heated rollers to fuse them to the backing cloth. In 1898, Walton devised a process for making straight-line inlaid linoleum that allowed for crisp, sharp geometric designs. This involved strips of uncured linoleum being cut and pieced together patchwork-fashion before being hot-rolled. Embossed inlaid linoleum was not introduced until 1926.
The heavier gauges of linoleum are known as "battleship linoleum", and are mainly used in high-traffic situations like offices and public buildings. It was originally manufactured to meet the specifications of the U.S. Navy for warship deck covering on enclosed decks instead of wood, hence the name. Most U.S. Navy warships removed their linoleum deck coverings following the attack on Pearl Harbor, as they were considered too flammable. Use of linoleum persisted in U.S. Navy submarines. Royal Navy warships used the similar product "Corticine".
Early in the twentieth century, a group of Dresden artists used easy-to-cut linoleum instead of wood for printmaking, creating the linocut printmaking technique – similar to woodcuts. Prominent artists who created linocut prints included Picasso and Henri Matisse.
Present day
As a floor covering, linoleum has often been replaced by polyvinyl chloride (PVC), referred to sometimes as vinyl. PVC has many properties that are superior to linoleum, including fire-resistance. Linoleum is still used in art for linocut prints. Linoleum is also considered an environmentally friendly alternative to PVC as it is derived from renewable, natural, biodegradable material.
References
Sources
External links
"Resilient Flooring: A Comparison of Vinyl, Linoleum and Cork": Sheila L. Jones, Georgia Tech Research Institute (Fall 1999)
Dominion Oilcloth and Linoleum Company illustrated catalogue, 1926
Historic linoleum materials in the Staten Island Historical Society's Online Collections Database
Brands that became generic
Composite materials
Floors | Linoleum | [
"Physics",
"Engineering"
] | 2,158 | [
"Structural engineering",
"Floors",
"Composite materials",
"Materials",
"Matter"
] |
671,711 | https://en.wikipedia.org/wiki/Hilbert%E2%80%93Smith%20conjecture | In mathematics, the Hilbert–Smith conjecture is concerned with the transformation groups of manifolds; and in particular with the limitations on topological groups G that can act effectively (faithfully) on a (topological) manifold M. Restricting to groups G which are locally compact and have a continuous, faithful group action on M, the conjecture states that G must be a Lie group.
Because of known structural results on G, it is enough to deal with the case where G is the additive group of p-adic integers, for some prime number p. An equivalent form of the conjecture is that the additive group of p-adic integers has no faithful group action on a topological manifold.
The naming of the conjecture is for David Hilbert and the American topologist Paul A. Smith. It is considered by some to be a better formulation of Hilbert's fifth problem than the characterisation, in the category of topological groups, of the Lie groups often cited as a solution.
In 1997, Dušan Repovš and Evgenij Ščepin proved the Hilbert–Smith conjecture for groups acting by Lipschitz maps on a Riemannian manifold using covering, fractal, and cohomological dimension theory.
In 1999, Gaven Martin extended their dimension-theoretic argument to quasiconformal actions on a Riemannian manifold and gave applications concerning unique analytic continuation for Beltrami systems.
In 2013, John Pardon proved the three-dimensional case of the Hilbert–Smith conjecture.
References
Further reading
.
Topological groups
Group actions (mathematics)
Conjectures
Unsolved problems in geometry
Structures on manifolds | Hilbert–Smith conjecture | [
"Physics",
"Mathematics"
] | 315 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Group actions",
"Unsolved problems in geometry",
"Space (mathematics)",
"Topological spaces",
"Conjectures",
"Topological groups",
"Mathematical problems",
"Symmetry"
] |
671,814 | https://en.wikipedia.org/wiki/Warped%20product | The warped product of two Riemannian (or pseudo-Riemannian) manifolds $(B, g_B)$ and $(F, g_F)$ with respect to a positive function $f$ on $B$ is the product space $B \times F$ with the metric tensor $g = g_B \oplus f^2 g_F$.
Warped geometries are useful in that separation of variables can be used when solving partial differential equations over them.
Examples
Warped geometries acquire their full meaning when we substitute the variable y for t, time and x, for s, space. Then the f(y) factor of the spatial dimension becomes the effect of time that in words of Einstein "curves space". How it curves space will define one or other solution to a space-time world. For that reason, different models of space-time use warped geometries.
Many basic solutions of the Einstein field equations are warped geometries, for example, the Schwarzschild solution and the Friedmann–Lemaitre–Robertson–Walker models.
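As a concrete illustration (written in a common notation, with the warping function made explicit), a spatially flat Friedmann–Lemaitre–Robertson–Walker metric is the warped product of the time line $(\mathbb{R}, -dt^2)$ with flat three-space, with the scale factor $a(t)$ as warping function: $ds^2 = -dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right)$.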
Also, warped geometries are the key building block of Randall–Sundrum models in string theory.
See also
Metric tensor
Exact solutions in general relativity
Poincaré half-plane model
References
Differential geometry
General relativity
String theory | Warped product | [
"Physics",
"Astronomy"
] | 224 | [
"Astronomical hypotheses",
"General relativity",
"Relativity stubs",
"Theory of relativity",
"String theory"
] |
671,821 | https://en.wikipedia.org/wiki/Randall%E2%80%93Sundrum%20model | In physics, Randall–Sundrum models (also called 5-dimensional warped geometry theory) are models that describe the world in terms of a warped-geometry higher-dimensional universe, or more concretely as a 5-dimensional anti-de Sitter space where the elementary particles (except the graviton) are localized on a (3 + 1)-dimensional brane or branes.
The two models were proposed in two articles in 1999 by Lisa Randall and Raman Sundrum because they were dissatisfied with the universal extra-dimensional models then in vogue. Such models require two fine tunings; one for the value of the bulk cosmological constant and the other for the brane tensions. Later, while studying RS models in the context of the anti-de Sitter / conformal field theory (AdS/CFT) correspondence, they showed how it can be dual to technicolor models.
The first of the two models, called RS1, has a finite size for the extra dimension with two branes, one at each end. The second, RS2, is similar to the first, but one brane has been placed infinitely far away, so that there is only one brane left in the model.
Overview
The model is a braneworld theory developed while trying to solve the hierarchy problem of the Standard Model. It involves a finite five-dimensional bulk that is extremely warped and contains two branes: the Planckbrane (where gravity is a relatively strong force; also called "Gravitybrane") and the Tevbrane (our home with the Standard Model particles; also called "Weakbrane"). In this model, the two branes are separated in the not-necessarily large fifth dimension by approximately 16 units (the units based on the brane and bulk energies). The Planckbrane has positive brane energy, and the Tevbrane has negative brane energy. These energies are the cause of the extremely warped spacetime.
Graviton probability function
In this warped spacetime that is only warped along the fifth dimension, the graviton's probability function is extremely high at the Planckbrane, but it drops exponentially as it moves closer towards the Tevbrane. In this, gravity would be much weaker on the Tevbrane than on the Planckbrane.
RS1 model
The RS1 model attempts to address the hierarchy problem. The warping of the extra dimension is analogous to the warping of spacetime in the vicinity of a massive object, such as a black hole. This warping, or red-shifting, generates a large ratio of energy scales, so that the natural energy scale at one end of the extra dimension is much larger than at the other end:
where k is some constant, and η has "−+++" metric signature. This space has boundaries at y = 1/k and y = 1/(Wk), where k is around the Planck scale, W is the warp factor, and Wk is around a TeV. The boundary at y = 1/k is called the Planck brane, and the boundary at y = 1/(Wk) is called the TeV brane. The particles of the standard model reside on the TeV brane. The distance between both branes is only −ln(W)/k, though.
In another coordinate system, the same geometry can be written with an explicit exponential warp factor multiplying the four-dimensional part of the metric.
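For orientation, the RS1 geometry is often quoted in one of two equivalent forms (the coordinates and symbols below follow common conventions and are intended only as a sketch of the construction, not as the article's original equations): in conformally flat coordinates, $ds^2 = \frac{1}{(k y)^2}\left(\eta_{\mu\nu}\,dx^\mu dx^\nu + dy^2\right)$ with $1/k \le y \le 1/(Wk)$, and in the exponentially warped form, $ds^2 = e^{-2 k r_c |\varphi|}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + r_c^2\,d\varphi^2$ with $-\pi \le \varphi \le \pi$ and $r_c$ the compactification radius.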
RS2 model
The RS2 model uses the same geometry as RS1, but there is no TeV brane. The particles of the standard model are presumed to be on the Planck brane. This model was originally of interest because it represented an infinite 5-dimensional model, which, in many respects, behaved as a 4-dimensional model. This setup may also be of interest for studies of the AdS/CFT conjecture.
Prior models
In 1998/99 Merab Gogberashvili published on arXiv a number of articles on a very similar theme. He showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then there is a possibility to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.
Experimental results
In August 2016, experimental results from the LHC excluded RS gravitons with masses below 3.85 TeV and 4.45 TeV for ˜k = 0.1 and 0.2 respectively, and, for ˜k = 0.01, excluded graviton masses below 1.95 TeV, except for the region between 1.75 TeV and 1.85 TeV. These are currently the most stringent limits on RS graviton production.
See also
DGP model
Goldberger–Wise mechanism
Kaluza–Klein theory
ADD model
Scientific importance of GW170817, a neutron star merger
References
Sources
External links
Lisa Randall's web page at Harvard University
Raman Sundrum's web page at the University of Maryland
Physical cosmology
Particle physics
String theory
Quantum gravity | Randall–Sundrum model | [
"Physics",
"Astronomy"
] | 1,078 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Quantum gravity",
"Particle physics",
"String theory",
"Physics beyond the Standard Model",
"Physical cosmology"
] |
671,875 | https://en.wikipedia.org/wiki/Conformal%20symmetry | In mathematical physics, the conformal symmetry of spacetime is expressed by an extension of the Poincaré group, known as the conformal group; in layman's terms, it refers to the fact that stretching, compressing or otherwise distorting spacetime preserves the angles between lines or curves that exist within spacetime.
Conformal symmetry encompasses special conformal transformations and dilations. In three spatial plus one time dimensions, conformal symmetry has 15 degrees of freedom: ten for the Poincaré group, four for special conformal transformations, and one for a dilation.
Harry Bateman and Ebenezer Cunningham were the first to study the conformal symmetry of Maxwell's equations. They called a generic expression of conformal symmetry a spherical wave transformation. General relativity in two spacetime dimensions also enjoys conformal symmetry.
Generators
The Lie algebra of the conformal group has the following representation:
where $M_{\mu\nu}$ are the Lorentz generators, $P_\mu$ generates translations, $D$ generates scaling transformations (also known as dilatations or dilations) and $K_\mu$ generates the special conformal transformations.
Commutation relations
The commutation relations are as follows:
all other commutators vanish. Here $\eta_{\mu\nu}$ is the Minkowski metric tensor.
Additionally, $D$ is a scalar and $K_\mu$ is a covariant vector under the Lorentz transformations.
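A standard presentation of these commutation relations, using the conventional generator names $M_{\mu\nu}$, $P_\mu$, $D$, $K_\mu$ (signs and factors of $i$ vary between references, so this is one common convention rather than the article's original form), is:
$[D, P_\mu] = i P_\mu$
$[D, K_\mu] = -i K_\mu$
$[K_\mu, P_\nu] = 2i\left(\eta_{\mu\nu} D - M_{\mu\nu}\right)$
$[M_{\mu\nu}, P_\rho] = i\left(\eta_{\nu\rho} P_\mu - \eta_{\mu\rho} P_\nu\right)$
$[M_{\mu\nu}, K_\rho] = i\left(\eta_{\nu\rho} K_\mu - \eta_{\mu\rho} K_\nu\right)$
together with the usual Lorentz algebra for $[M_{\mu\nu}, M_{\rho\sigma}]$, all other commutators vanishing.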
The special conformal transformations are given by
where is a parameter describing the transformation. This special conformal transformation can also be written as , where
which shows that it consists of an inversion, followed by a translation, followed by a second inversion.
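In one common parametrisation (with $b^\mu$ the transformation parameter; this explicit form is a standard choice and not necessarily the article's original notation), a special conformal transformation acts on coordinates as
$x^\mu \;\to\; x'^\mu = \frac{x^\mu - b^\mu x^2}{1 - 2\,b\cdot x + b^2 x^2},$
which is equivalent to an inversion $x^\mu \to x^\mu/x^2$, followed by the translation by $-b^\mu$, followed by a second inversion.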
In two-dimensional spacetime, the transformations of the conformal group are the conformal transformations. There are infinitely many of them.
In more than two dimensions, Euclidean conformal transformations map circles to circles, and hyperspheres to hyperspheres with a straight line considered a degenerate circle and a hyperplane a degenerate hypercircle.
In more than two Lorentzian dimensions, conformal transformations map null rays to null rays and light cones to light cones, with a null hyperplane being a degenerate light cone.
Applications
Conformal field theory
In relativistic quantum field theories, the possibility of symmetries is strictly restricted by Coleman–Mandula theorem under physically reasonable assumptions. The largest possible global symmetry group of a non-supersymmetric interacting field theory is a direct product of the conformal group with an internal group. Such theories are known as conformal field theories.
Second-order phase transitions
One particular application is to critical phenomena in systems with local interactions. Fluctuations in such systems are conformally invariant at the critical point. That allows for classification of universality classes of phase transitions in terms of conformal field theories.
Conformal invariance is also present in two-dimensional turbulence at high Reynolds number.
High-energy physics
Many theories studied in high-energy physics admit conformal symmetry because it is typically implied by local scale invariance. A famous example is d = 4, N = 4 supersymmetric Yang–Mills theory, due to its relevance for the AdS/CFT correspondence. Also, the worldsheet in string theory is described by a two-dimensional conformal field theory coupled to two-dimensional gravity.
Mathematical proofs of conformal invariance in lattice models
Physicists have found that many lattice models become conformally invariant in the critical limit. However, mathematical proofs of these results have only appeared much later, and only in some cases.
In 2010, the mathematician Stanislav Smirnov was awarded the Fields medal "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics".
In 2020, the mathematician Hugo Duminil-Copin and his collaborators proved that rotational invariance exists at the boundary between phases in many physical systems.
See also
Conformal map
Conformal group
Coleman–Mandula theorem
Renormalization group
Scale invariance
Superconformal algebra
Conformal Killing equation
References
Sources
Symmetry
Scaling symmetries
Conformal field theory | Conformal symmetry | [
"Physics",
"Mathematics"
] | 814 | [
"Scaling symmetries",
"Geometry",
"Symmetry"
] |
671,882 | https://en.wikipedia.org/wiki/Infrared%20fixed%20point | In physics, an infrared fixed point is a set of coupling constants, or other parameters, that evolve from arbitrary initial values at very high energies (short distance) to fixed, stable values, usually predictable, at low energies (large distance). This usually involves the use of the renormalization group, which specifically details the way parameters in a physical system (a quantum field theory) depend on the energy scale being probed.
Conversely, if the length-scale decreases and the physical parameters approach fixed values, then we have ultraviolet fixed points. The fixed points are generally independent of the initial values of the parameters over a large range of the initial values. This is known as universality.
Statistical physics
In the statistical physics of second order phase transitions, the physical system approaches an infrared fixed point that is independent of the initial short distance dynamics that define the material. This determines the properties of the phase transition at the critical temperature, or critical point. Observables, such as critical exponents, usually depend only upon the dimension of space and are independent of the atomic or molecular constituents.
Top Quark
In the Standard Model, quarks and leptons have "Yukawa couplings" to the Higgs boson which determine the masses of the particles. Most of the quarks' and leptons' Yukawa couplings are small compared to the top quark's Yukawa coupling. Yukawa couplings are not constants and their properties change depending on the energy scale at which they are measured; this is known as the running of the constants. The dynamics of Yukawa couplings are determined by the renormalization group equation:
where one coupling is the color gauge coupling (itself a function of the energy scale and associated with asymptotic freedom) and the other is the Yukawa coupling for the quark in question. This equation describes how the Yukawa coupling changes with energy scale.
A more complete version of the same formula is more appropriate for the top quark:
where the additional couplings are the weak isospin gauge coupling and the weak hypercharge gauge coupling. For small or nearly constant values of these couplings, the qualitative behavior is the same.
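For reference, the one-loop renormalization group equation for the top Yukawa coupling is commonly quoted in the form below (couplings written here as $g_3$, $g_2$, $g'$ for colour, weak isospin and hypercharge; the symbols and coefficients follow a standard convention and should be checked against a textbook rather than read as the article's own equation):
$\frac{d y_t}{d \ln \mu} \;=\; \frac{y_t}{16\pi^2}\left(\frac{9}{2}\, y_t^2 \;-\; 8\, g_3^2 \;-\; \frac{9}{4}\, g_2^2 \;-\; \frac{17}{12}\, g'^2\right).$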
The Yukawa couplings of the up, down, charm, strange and bottom quarks are small at the extremely high energy scale of grand unification. Therefore, the Yukawa self-coupling term can be neglected in the above equation for all but the top quark. Solving, we then find that the Yukawa coupling is increased slightly at the low energy scales at which the quark masses are generated by the Higgs.
On the other hand, solutions to this equation for large initial values typical for the top quark cause the expression on the right side to quickly approach zero as we descend in energy scale, which stops the Yukawa coupling from changing and locks it to the QCD coupling. This is known as an (infrared) quasi-fixed point of the renormalization group equation for the Yukawa coupling. No matter what the initial starting value of the coupling is, if it is sufficiently large at high energies to begin with, it will reach this quasi-fixed point value, which then determines the corresponding prediction for the quark mass.
The renormalization group equation for large values of the top Yukawa coupling was first considered in 1981 by Pendleton & Ross, and the "infrared quasi-fixed point" was proposed by Hill.
The prevailing view at the time was that the top quark mass would lie in a range of 15 to 26 GeV. The quasi-infrared fixed point emerged in top quark condensation theories of electroweak symmetry breaking in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks.
While the value of the quasi-fixed point is determined in the Standard Model, if there is more than one Higgs doublet the value will be reduced by an increase in the corresponding factor in the equation and by any Higgs mixing angle effects. Since the observed top quark mass of 174 GeV is slightly lower than the standard model prediction by about 20%, this suggests there may be more Higgs doublets beyond the single standard model Higgs boson. If there are many additional Higgs doublets in nature, the predicted value of the quasi-fixed point comes into agreement with experiment. Even if there are two Higgs doublets, the fixed point for the top mass is reduced to 170–200 GeV. Some theorists believed this was supporting evidence for the Supersymmetric Standard Model; however, no other signs of supersymmetry have emerged at the Large Hadron Collider.
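The attraction towards the quasi-fixed point can be illustrated numerically. The sketch below is a deliberately simplified illustration rather than a precision calculation: it keeps only the QCD and Yukawa self-coupling terms, uses the one-loop QCD beta function with six flavours, and picks illustrative starting values. It integrates the running from a high scale down to roughly the top quark mass and shows that sufficiently large initial Yukawa couplings are funneled to nearly the same low-energy value, while a small initial coupling is not.

import math

def run_down(y_high, g3_high=0.55, mu_high=1.0e15, mu_low=173.0, steps=20000):
    """Integrate simplified one-loop RG equations from a high scale down to about m_t.

    Only the QCD and Yukawa self-coupling terms are kept:
        dy/dt  = y * (9/2 * y**2 - 8 * g3**2) / (16 * pi**2)
        dg3/dt = -7 * g3**3 / (16 * pi**2)      # one-loop QCD with six flavours
    where t = ln(mu).
    """
    y, g3 = y_high, g3_high
    t_high, t_low = math.log(mu_high), math.log(mu_low)
    dt = (t_low - t_high) / steps              # negative step: running towards lower scales
    for _ in range(steps):
        beta_y = y * (4.5 * y * y - 8.0 * g3 * g3) / (16.0 * math.pi ** 2)
        beta_g3 = -7.0 * g3 ** 3 / (16.0 * math.pi ** 2)
        y += beta_y * dt
        g3 += beta_g3 * dt
    return y

for y0 in (0.5, 1.0, 2.0, 5.0):
    print(f"y_t at 1e15 GeV = {y0:>4}   ->   y_t at m_t = {run_down(y0):.3f}")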
Banks–Zaks fixed point
Another example of an infrared fixed point is the Banks–Zaks fixed point in which the coupling constant of a Yang–Mills theory evolves to a fixed value. The beta-function vanishes, and the theory possesses a symmetry known as conformal symmetry.
Footnotes
See also
Top quark
Cutoff (physics)
References
Renormalization group
Statistical mechanics
Conformal field theory
Fixed points (mathematics) | Infrared fixed point | [
"Physics",
"Mathematics"
] | 1,013 | [
"Physical phenomena",
"Mathematical analysis",
"Fixed points (mathematics)",
"Critical phenomena",
"Renormalization group",
"Topology",
"Statistical mechanics",
"Dynamical systems"
] |
672,080 | https://en.wikipedia.org/wiki/Homentropic%20flow | In fluid mechanics, a homentropic flow has uniform and constant entropy. It distinguishes itself from an isentropic or particle isentropic flow, where the entropy level of each fluid particle does not change with time, but may vary from particle to particle. This means that a homentropic flow is necessarily isentropic, but an isentropic flow need not be homentropic.
A homentropic flow of a perfect gas is an example of a barotropic fluid, in which the pressure and density are related by $p = C\rho^{\gamma}$, where $C$ is a constant and $\gamma$ is the heat capacity ratio.
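A short way to see this (using standard perfect-gas relations; the notation is the usual textbook one rather than the article's): for a perfect gas the specific entropy can be written $s = c_v \ln\!\left(p/\rho^{\gamma}\right) + \text{const}$, so requiring $s$ to take the same constant value everywhere and at all times forces $p = C\rho^{\gamma}$ with a single constant $C$ throughout the flow.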
Thermodynamic entropy
Fluid dynamics | Homentropic flow | [
"Physics",
"Chemistry",
"Engineering"
] | 129 | [
"Physical quantities",
"Chemical engineering",
"Thermodynamic entropy",
"Entropy",
"Piping",
"Statistical mechanics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
672,202 | https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%20theory | Yang–Mills theory is a quantum field theory for nuclear binding devised by Chen Ning Yang and Robert Mills in 1953, as well as a generic term for the class of similar theories. The Yang–Mills theory is a gauge theory based on a special unitary group SU(n), or more generally any compact Lie group. A Yang–Mills theory seeks to describe the behavior of elementary particles using these non-abelian Lie groups and is at the core of the unification of the electromagnetic force and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of the understanding of the Standard Model of particle physics.
History and qualitative description
Gauge theory in electrodynamics
All known fundamental interactions can be described in terms of gauge theories, but working this out took decades. Hermann Weyl's pioneering work on this project started in 1915 when his colleague Emmy Noether proved that every conserved physical quantity has a matching symmetry, and culminated in 1928 when he published his book applying the geometrical theory of symmetry (group theory) to quantum mechanics. Weyl named the relevant symmetry in Noether's theorem the "gauge symmetry", by analogy to distance standardization in railroad gauges.
Erwin Schrödinger in 1922, three years before working on his equation, connected Weyl's group concept to electron charge. Schrödinger showed that such a group transformation produced a phase shift in electromagnetic fields that matched the conservation of electric charge. As the theory of quantum electrodynamics developed in the 1930s and 1940s, these group transformations played a central role. Many physicists thought there must be an analog for the dynamics of nucleons.
Chen Ning Yang in particular was obsessed with this possibility.
Yang and Mills find the nuclear force gauge theory
Yang's core idea was to look for a conserved quantity in nuclear physics comparable to electric charge and use it to develop a corresponding gauge theory comparable to electrodynamics. He settled on conservation of isospin, a quantum number that distinguishes a neutron from a proton, but he made no progress on a theory. Taking a break from Princeton in the summer of 1953, Yang met a collaborator who could help: Robert Mills. As Mills himself describes:"During the academic year 1953–1954, Yang was a visitor to Brookhaven National Laboratory ... I was at Brookhaven also ... and was assigned to the same office as Yang. Yang, who has demonstrated on a number of occasions his generosity to physicists beginning their careers, told me about his idea of generalizing gauge invariance and we discussed it at some length ... I was able to contribute something to the discussions, especially with regard to the quantization procedures, and to a small degree in working out the formalism; however, the key ideas were Yang's."
In the summer of 1953, Yang and Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to non-abelian groups, selecting the group SU(2) to provide an explanation for isospin conservation in collisions involving the strong interactions. Yang's presentation of the work at Princeton in February 1954 was challenged by Pauli, who asked about the mass of the field developed with the gauge invariance idea. Pauli knew that this might be an issue as he had worked on applying gauge invariance but chose not to publish it, viewing the massless excitations of the theory to be "unphysical 'shadow particles'". Yang and Mills published in October 1954; near the end of the paper, they admit:
This problem of unphysical massless excitation blocked further progress.
The idea was set aside until 1960, when the concept of particles acquiring mass through symmetry breaking in massless theories was put forward, initially by Jeffrey Goldstone, Yoichiro Nambu, and Giovanni Jona-Lasinio. This prompted a significant restart of Yang–Mills theory studies that proved successful in the formulation of both electroweak unification and quantum chromodynamics (QCD). The electroweak interaction is described by the gauge group SU(2) × U(1), while QCD is an SU(3) Yang–Mills theory. The massless gauge bosons of the electroweak SU(2) × U(1) mix after spontaneous symmetry breaking to produce the three massive bosons of the weak interaction (W+, W−, and Z) as well as the still-massless photon field. The dynamics of the photon field and its interactions with matter are, in turn, governed by the U(1) gauge theory of quantum electrodynamics. The Standard Model combines the strong interaction with the unified electroweak interaction (unifying the weak and electromagnetic interaction) through the symmetry group SU(3) × SU(2) × U(1). In the current epoch the strong interaction is not unified with the electroweak interaction, but from the observed running of the coupling constants it is believed they all converge to a single value at very high energies.
Phenomenology at lower energies in quantum chromodynamics is not completely understood due to the difficulties of managing such a theory with a strong coupling. This may be the reason why confinement has not been theoretically proven, though it is a consistent experimental observation. This shows why QCD confinement at low energy is a mathematical problem of great relevance, and why the Yang–Mills existence and mass gap problem is a Millennium Prize Problem.
Parallel work on non-Abelian gauge theories
In 1953, in a private correspondence, Wolfgang Pauli formulated a six-dimensional theory of Einstein's field equations of general relativity, extending the five-dimensional theory of Kaluza, Klein, Fock, and others to a higher-dimensional internal space. However, there is no evidence that Pauli developed the Lagrangian of a gauge field or the quantization of it. Because Pauli found that his theory "leads to some rather unphysical shadow particles", he refrained from publishing his results formally. Although Pauli did not publish his six-dimensional theory, he gave two seminar lectures about it in Zürich in November 1953.
In January 1954 Ronald Shaw, a graduate student at the University of Cambridge also developed a non-Abelian gauge theory for nuclear forces.
However, the theory needed massless particles in order to maintain gauge invariance. Since no such massless particles were known at the time, Shaw and his supervisor Abdus Salam chose not to publish their work.
Shortly after Yang and Mills published their paper in October 1954, Salam encouraged Shaw to publish his work to mark his contribution. Shaw declined, and instead it only forms a chapter of his PhD thesis published in 1956.
Mathematical overview
Yang–Mills theories are special examples of gauge theories with a non-abelian symmetry group given by the Lagrangian
with the generators of the Lie algebra, indexed by , corresponding to the -quantities (the curvature or field-strength form) satisfying
Here, the are structure constants of the Lie algebra (totally antisymmetric if the generators of the Lie algebra are normalised such that is proportional to ), the covariant derivative is defined as
is the identity matrix (matching the size of the generators), is the vector potential, and is the coupling constant. In four dimensions, the coupling constant is a pure number and for a group one has
The relation
can be derived by the commutator
The field has the property of being self-interacting and the equations of motion that one obtains are said to be semilinear, as nonlinearities are both with and without derivatives. This means that one can manage this theory only by perturbation theory with small nonlinearities.
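In a widely used convention (the explicit symbols here are standard choices rather than a reproduction of the article's own equations), the Lagrangian, field strength and covariant derivative read
$\mathcal{L} = -\tfrac{1}{4}\, F^{a}_{\mu\nu} F^{a\,\mu\nu},$
$F^{a}_{\mu\nu} = \partial_\mu A^{a}_\nu - \partial_\nu A^{a}_\mu + g\, f^{abc} A^{b}_\mu A^{c}_\nu,$
$D_\mu = \partial_\mu - i g\, T^{a} A^{a}_\mu,$
with $T^a$ the Lie-algebra generators, $f^{abc}$ the structure constants and $g$ the coupling constant; the terms quadratic in $A$ inside $F$ are what make the field self-interacting.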
Note that the transition between "upper" ("contravariant") and "lower" ("covariant") vector or tensor components is trivial for the group indices a, whereas for μ and ν it is nontrivial, corresponding e.g. to the usual Lorentz signature.
From the given Lagrangian one can derive the equations of motion given by
Putting these can be rewritten as
A Bianchi identity holds
which is equivalent to the Jacobi identity
since Define the dual strength tensor
then the Bianchi identity can be rewritten as
A source enters into the equations of motion as
Note that the currents must properly change under gauge group transformations.
We give here some comments about the physical dimensions of the coupling. In d spacetime dimensions, the field and the coupling acquire mass dimensions that depend on d, which implies that Yang–Mills theory is not renormalizable for dimensions greater than four. Furthermore, for d = 4 the coupling is dimensionless, and both the field and the square of the coupling have the same dimensions as the field and the coupling of a massless quartic scalar field theory. So, these theories share the scale invariance at the classical level.
Quantization
A method of quantizing the Yang–Mills theory is by functional methods, i.e. path integrals. One introduces a generating functional for n-point functions as
but this integral has no meaning as it is because the potential vector can be arbitrarily chosen due to the gauge freedom. This problem was already known for quantum electrodynamics but here becomes more severe due to non-abelian properties of the gauge group. A way out has been given by Ludvig Faddeev and Victor Popov with the introduction of a ghost field (see Faddeev–Popov ghost) that has the property of being unphysical since, although it agrees with Fermi–Dirac statistics, it is a complex scalar field, which violates the spin–statistics theorem. So, we can write the generating functional as
being
for the field,
for the gauge fixing and
for the ghost. This is the expression commonly used to derive Feynman's rules (see Feynman diagram). Here we have for the ghost field while fixes the gauge's choice for the quantization. Feynman's rules obtained from this functional are the following
These rules for Feynman's diagrams can be obtained when the generating functional given above is rewritten as
with
being the generating functional of the free theory. Expanding in the coupling and computing the functional derivatives, we are able to obtain all the n-point functions with perturbation theory. Using the LSZ reduction formula we get from the n-point functions the corresponding process amplitudes, cross sections and decay rates. The theory is renormalizable and corrections are finite at any order of perturbation theory.
For quantum electrodynamics the ghost field decouples because the gauge group is abelian. This can be seen from the coupling between the gauge field and the ghost field that is For the abelian case, all the structure constants are zero and so there is no coupling. In the non-abelian case, the ghost field appears as a useful way to rewrite the quantum field theory without physical consequences on the observables of the theory such as cross sections or decay rates.
One of the most important results obtained for Yang–Mills theory is asymptotic freedom. This result can be obtained by assuming that the coupling constant is small (so small nonlinearities), as for high energies, and applying perturbation theory. The relevance of this result is due to the fact that a Yang–Mills theory that describes strong interaction and asymptotic freedom permits proper treatment of experimental results coming from deep inelastic scattering.
To obtain the behavior of the Yang–Mills theory at high energies, and so to prove asymptotic freedom, one applies perturbation theory assuming a small coupling. This is verified a posteriori in the ultraviolet limit. In the opposite limit, the infrared limit, the situation is the opposite, as the coupling is too large for perturbation theory to be reliable. Most of the difficulties that research meets is just managing the theory at low energies. That is the interesting case, being inherent to the description of hadronic matter and, more generally, to all the observed bound states of gluons and quarks and their confinement (see hadrons). The most used method to study the theory in this limit is to try to solve it on computers (see lattice gauge theory). In this case, large computational resources are needed to be sure the correct limit of infinite volume (smaller lattice spacing) is obtained. This is the limit the results must be compared with. Smaller spacing and larger coupling are not independent of each other, and larger computational resources are needed for each. As of today, the situation appears somewhat satisfactory for the hadronic spectrum and the computation of the gluon and ghost propagators, but the glueball and hybrids spectra are yet a questioned matter in view of the experimental observation of such exotic states. Indeed, the resonance
is not seen in any of such lattice computations and contrasting interpretations have been put forward. This is a hotly debated issue.
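As a toy illustration of the lattice approach mentioned above (a deliberately minimal sketch: compact U(1) rather than SU(3), two dimensions, a tiny lattice, and an unoptimised Metropolis update, so it is not representative of production lattice-QCD codes), one can sample the Wilson plaquette action and measure the average plaquette:

import math, random

L, BETA, SWEEPS = 8, 2.0, 200           # lattice size, coupling, number of sweeps
random.seed(1)

# U(1) link variables U_mu(x, y) = exp(i * theta[mu][x][y]); cold start
theta = [[[0.0] * L for _ in range(L)] for _ in range(2)]

def plaq_angle(x, y):
    """Angle of the plaquette with corners (x,y), (x+1,y), (x+1,y+1), (x,y+1)."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return theta[0][x][y] + theta[1][xp][y] - theta[0][x][yp] - theta[1][x][y]

def local_action(mu, x, y):
    """Sum of (1 - cos) over the two plaquettes that contain link (mu, x, y)."""
    if mu == 0:
        plaqs = [(x, y), (x, (y - 1) % L)]
    else:
        plaqs = [(x, y), ((x - 1) % L, y)]
    return sum(1.0 - math.cos(plaq_angle(px, py)) for px, py in plaqs)

def sweep():
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = theta[mu][x][y]
                s_old = local_action(mu, x, y)
                theta[mu][x][y] = old + random.uniform(-1.0, 1.0)
                s_new = local_action(mu, x, y)
                if random.random() >= math.exp(-BETA * (s_new - s_old)):
                    theta[mu][x][y] = old          # reject the proposal

for _ in range(SWEEPS):
    sweep()

avg_plaq = sum(math.cos(plaq_angle(x, y)) for x in range(L) for y in range(L)) / L**2
print(f"average plaquette at beta = {BETA}: {avg_plaq:.3f}")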
Open problems
Yang–Mills theories met with general acceptance in the physics community after Gerard 't Hooft, in 1972, worked out their renormalization, relying on a formulation of the problem worked out by his advisor Martinus Veltman.
Renormalizability is obtained even if the gauge bosons described by this theory are massive, as in the electroweak theory, provided the mass is only an "acquired" one, generated by the Higgs mechanism.
The mathematics of the Yang–Mills theory is a very active field of research, yielding e.g. invariants of differentiable structures on four-dimensional manifolds via work of Simon Donaldson. Furthermore, the field of Yang–Mills theories was included in the Clay Mathematics Institute's list of "Millennium Prize Problems". Here the prize-problem consists, especially, in a proof of the conjecture that the lowest excitations of a pure Yang–Mills theory (i.e. without matter fields) have a finite mass-gap with regard to the vacuum state. Another open problem, connected with this conjecture, is a proof of the confinement property in the presence of additional fermions.
In physics the survey of Yang–Mills theories does not usually start from perturbation analysis or analytical methods, but more recently from systematic application of numerical methods to lattice gauge theories.
See also
Aharonov–Bohm effect
Coulomb gauge
Deformed Hermitian Yang–Mills equations
Gauge covariant derivative
Gauge theory (mathematics)
Hermitian Yang–Mills equations
Kaluza–Klein theory
Lattice gauge theory
Lorenz gauge
N = 4 supersymmetric Yang–Mills theory
Propagator
Quantum gauge theory
Field theoretical formulation of the standard model
Symmetry in physics
Two-dimensional Yang–Mills theory
Weyl gauge
Yang–Mills equations
F-Yang–Mills equations
Bi-Yang–Mills equations
Yang–Mills existence and mass gap
Yang–Mills–Higgs equations
References
Further reading
Books
Articles
External links
Gauge theories
Symmetry | Yang–Mills theory | [
"Physics",
"Mathematics"
] | 3,030 | [
"Geometry",
"Symmetry"
] |
672,259 | https://en.wikipedia.org/wiki/Batalin%E2%80%93Vilkovisky%20formalism | In theoretical physics, the Batalin–Vilkovisky (BV) formalism (named for Igor Batalin and Grigori Vilkovisky) was developed as a method for determining the ghost structure for Lagrangian gauge theories, such as gravity and supergravity, whose corresponding Hamiltonian formulation has constraints not related to a Lie algebra (i.e., the role of Lie algebra structure constants are played by more general structure functions). The BV formalism, based on an action that contains both fields and "antifields", can be thought of as a vast generalization of the original BRST formalism for pure Yang–Mills theory to an arbitrary Lagrangian gauge theory. Other names for the Batalin–Vilkovisky formalism are field-antifield formalism, Lagrangian BRST formalism, or BV–BRST formalism. It should not be confused with the Batalin–Fradkin–Vilkovisky (BFV) formalism, which is the Hamiltonian counterpart.
Batalin–Vilkovisky algebras
In mathematics, a Batalin–Vilkovisky algebra is a graded supercommutative algebra (with a unit 1) with a second-order nilpotent operator Δ of degree −1. More precisely, it satisfies the identities
(The product is associative)
(The product is (super-)commutative)
(The product has degree 0)
(Δ has degree −1)
(Nilpotency (of order 2))
The Δ operator is of second order:
One often also requires normalization:
(normalization)
Antibracket
A Batalin–Vilkovisky algebra becomes a Gerstenhaber algebra if one defines the Gerstenhaber bracket by
Other names for the Gerstenhaber bracket are Buttin bracket, antibracket, or odd Poisson bracket. The antibracket satisfies
(The antibracket (,) has degree −1)
(Skewsymmetry)
(The Jacobi identity)
(The Poisson property; the Leibniz rule)
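For reference, one standard way of writing this bracket (sign conventions vary between authors, so this should be read as a common convention rather than the article's original formula) is
$(a,b) \;=\; (-1)^{|a|}\,\Delta(ab) \;-\; (-1)^{|a|}\,(\Delta a)\,b \;-\; a\,(\Delta b),$
which measures the failure of $\Delta$ to be a derivation of the product.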
Odd Laplacian
The normalized operator is defined as
It is often called the odd Laplacian, in particular in the context of odd Poisson geometry. It "differentiates" the antibracket
(The operator differentiates (,))
The square of the normalized operator is a Hamiltonian vector field with odd Hamiltonian Δ(1)
(The Leibniz rule)
which is also known as the modular vector field. Assuming normalization Δ(1)=0, the odd Laplacian is just the Δ operator, and the modular vector field vanishes.
Compact formulation in terms of nested commutators
If one introduces the left multiplication operator as
and the supercommutator [,] as
for two arbitrary operators S and T, then the definition of the antibracket may be written compactly as
and the second order condition for Δ may be written compactly as
(The Δ operator is of second order)
where it is understood that the pertinent operator acts on the unit element 1. In other words, is a first-order (affine) operator, and is a zeroth-order operator.
Master equation
The classical master equation for an even degree element S (called the action) of a Batalin–Vilkovisky algebra is the equation
The quantum master equation for an even degree element W of a Batalin–Vilkovisky algebra is the equation
or equivalently,
Assuming normalization Δ(1) = 0, the quantum master equation reads
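In one common sign convention (other conventions insert factors of $i$ or organise the powers of $\hbar$ differently), the classical master equation is $(S,S) = 0$, and the quantum master equation is $\tfrac{1}{2}(W,W) + \hbar\,\Delta W = 0$, equivalently $\Delta\, e^{W/\hbar} = 0$.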
Generalized BV algebras
In the definition of a generalized BV algebra, one drops the second-order assumption for Δ. One may then define an infinite hierarchy of higher brackets of degree −1
The brackets are (graded) symmetric
(Symmetric brackets)
where is a permutation, and is the Koszul sign of the permutation
.
The brackets constitute a homotopy Lie algebra, also known as an algebra, which satisfies generalized Jacobi identities
(Generalized Jacobi identities)
The first few brackets are:
(The zero-bracket)
(The one-bracket)
(The two-bracket)
(The three-bracket)
In particular, the one-bracket is the odd Laplacian, and the two-bracket is the antibracket up to a sign. The first few generalized Jacobi identities are:
( is -closed)
( is the Hamiltonian for the modular vector field )
(The operator differentiates (,) generalized)
(The generalized Jacobi identity)
where the Jacobiator for the two-bracket is defined as
BV n-algebras
The Δ operator is by definition of n'th order if and only if the (n + 1)-bracket vanishes. In that case, one speaks of a BV n-algebra. Thus a BV 2-algebra is by definition just a BV algebra. The Jacobiator vanishes within a BV algebra, which means that the antibracket here satisfies the Jacobi identity. A BV 1-algebra that satisfies normalization Δ(1) = 0 is the same as a differential graded algebra (DGA) with differential Δ. A BV 1-algebra has vanishing antibracket.
Odd Poisson manifold with volume density
Let there be given an (n|n) supermanifold with an odd Poisson bi-vector and a Berezin volume density , also known as a P-structure and an S-structure, respectively. Let the local coordinates be called . Let the derivatives and
denote the left and right derivative of a function f wrt. , respectively. The odd Poisson bi-vector satisfies more precisely
(The odd Poisson structure has degree –1)
(Skewsymmetry)
(The Jacobi identity)
Under change of coordinates the odd Poisson bi-vector
and Berezin volume density transform as
where sdet denotes the superdeterminant, also known as the Berezinian.
Then the odd Poisson bracket is defined as
A Hamiltonian vector field with Hamiltonian f can be defined as
The (super-)divergence of a vector field is defined as
Recall that Hamiltonian vector fields are divergencefree in even Poisson geometry because of Liouville's Theorem.
In odd Poisson geometry the corresponding statement does not hold. The odd Laplacian measures the failure of Liouville's Theorem. Up to a sign factor, it is defined as one half the divergence of the corresponding Hamiltonian vector field,
The odd Poisson structure and Berezin volume density are said to be compatible if the modular vector field vanishes. In that case the odd Laplacian is a BV Δ operator with normalization Δ(1)=0. The corresponding BV algebra is the algebra of functions.
Odd symplectic manifold
If the odd Poisson bi-vector is invertible, one has an odd symplectic manifold. In that case, there exists an odd Darboux Theorem. That is, there exist local Darboux coordinates, i.e., coordinates , and momenta , of degree
such that the odd Poisson bracket is on Darboux form
In theoretical physics, the coordinates and momenta are called fields and antifields, and are typically denoted and , respectively.
Khudaverdian's operator acts on the vector space of semidensities, and is a globally well-defined operator on the atlas of Darboux neighborhoods. It depends only on the P-structure. It is manifestly nilpotent, and of degree −1. Nevertheless, it is technically not a BV Δ operator, as the vector space of semidensities has no multiplication. (The product of two semidensities is a density rather than a semidensity.) Given a fixed density, one may construct a nilpotent BV Δ operator as
whose corresponding BV algebra is the algebra of functions, or equivalently, scalars. The odd symplectic structure and density are compatible if and only if Δ(1) is an odd constant.
Examples
The Schouten–Nijenhuis bracket for multi-vector fields is an example of an antibracket.
If L is a Lie superalgebra, and Π is the operator exchanging the even and odd parts of a super space, then the symmetric algebra of Π(L) (the "exterior algebra" of L) is a Batalin–Vilkovisky algebra with Δ given by the usual differential used to compute Lie algebra cohomology.
See also
BRST quantization
Gerstenhaber algebra
Supermanifold
Analysis of flows
Poisson manifold
References
Pedagogical
Costello, K. (2011). "Renormalization and Effective Field Theory". (Explains perturbative quantum field theory and the rigorous aspects, such as quantizing Chern-Simons theory and Yang-Mills theory using BV-formalism)
References
Erratum-ibid. 30 (1984) 508 .
Algebras
Gauge theories
Supersymmetry
Symplectic geometry
Theoretical physics | Batalin–Vilkovisky formalism | [
"Physics",
"Mathematics"
] | 1,914 | [
"Mathematical structures",
"Algebras",
"Theoretical physics",
"Unsolved problems in physics",
"Algebraic structures",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry"
] |
673,356 | https://en.wikipedia.org/wiki/Supercharge | In theoretical physics, a supercharge is a generator of supersymmetry transformations. It is an example of the general notion of a charge in physics.
Supercharge, denoted by the symbol Q, is an operator which transforms bosons into fermions, and vice versa. Since the supercharge operator changes a particle with spin one-half to a particle with spin one or zero, the supercharge itself is a spinor that carries one half unit of spin.
Depending on the context, supercharges may also be called Grassmann variables or Grassmann directions; they are generators of the exterior algebra of anti-commuting numbers, the Grassmann numbers. All these various usages are essentially synonymous; they refer to the grading between bosons and fermions, or equivalently, the grading between c-numbers and a-numbers. Calling it a charge emphasizes the notion of a symmetry at work.
Commutation
Supercharge is described by the super-Poincaré algebra.
Supercharge commutes with the Hamiltonian operator:
[ Q , H ] = 0
So does its adjoint.
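In four-dimensional N = 1 supersymmetry, written in two-component (Weyl) spinor notation, the defining anticommutation relation of the supercharges is usually quoted as
$\{ Q_\alpha, \bar{Q}_{\dot{\beta}} \} = 2\,\sigma^{\mu}_{\alpha\dot{\beta}}\, P_\mu, \qquad \{ Q_\alpha, Q_\beta \} = 0,$
so that two successive supersymmetry transformations generate a translation; the commutation with the Hamiltonian quoted above is the time component of the statement that $Q$ commutes with the momentum operators. (The explicit index conventions here are standard ones, not necessarily those of the original article.)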
See also
R-symmetry
References
Supersymmetry | Supercharge | [
"Physics"
] | 244 | [
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum physics stubs",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry"
] |
673,711 | https://en.wikipedia.org/wiki/Verneshot | A verneshot (named after French author Jules Verne) is a hypothetical volcanic eruption event caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory, leading to significant further damage after the material crashes back down to the surface.
Connection with mass extinctions
Verneshots have been proposed as a causal mechanism explaining the statistically unlikely contemporaneous occurrence of continental flood basalts, mass extinctions, and "impact signals" (such as planar deformation features, shocked quartz, and iridium anomalies) traditionally considered definitive evidence of hypervelocity impact events.
The verneshot theory suggests that mantle plumes may cause heating and the buildup of carbon dioxide gas underneath continental lithosphere. If continental rifting occurs above this location, an explosive release of the built up gas may occur, potentially sending out a column of crust and mantle into a globally dispersive, super-stratospheric trajectory. It is unclear whether such a column could stay coherent through this process, or whether the force of this process would result in it shattering into much smaller pieces before impacting. The pipe through which the magma and gas had travelled would collapse during this process, sending a shockwave at hypersonic velocity that would deform the surrounding craton.
A verneshot event is likely to be related to nearby continental flood basalt events, which may occur before, during or after the verneshot event. This may help in searching for evidence for the results of verneshot events; however, it is also quite probable that most of such evidence will be buried underneath the basalt flows, making investigation difficult. J. Phipps Morgan and others have suggested that subcircular Bouguer gravity anomalies recognized beneath the Deccan Traps may indicate the presence of verneshot pipes related to the Cretaceous–Paleogene extinction event.
If the Deccan Traps were the location of a verneshot event at the Cretaceous–Paleogene boundary, the strong iridium spike at the Cretaceous–Paleogene boundary could be explained by the iridium-rich nature of volatiles in the Reunion mantle plume, which is currently beneath Piton de la Fournaise, but during the end Cretaceous was located beneath India in the area of the Deccan Traps; the verneshot event could potentially distribute the iridium globally.
Tunguska event
A verneshot has been proposed as an alternate explanation for the Tunguska event, widely regarded as the result of an atmospheric explosion of a small comet or asteroid. Arguments offered for this mechanism include the lack of extraterrestrial material at the event site, the lack of a credible impact structure, and the presence of shocked quartz in surface outcrops. However, this hypothesis has not been generally accepted, with Mark Boslough arguing that there is no basis for rejecting the impact hypothesis.
Name
In 1865 Jules Verne's novel From the Earth to the Moon introduced the concept of a ballistic projectile escaping the Earth's gravity, from which Phipps Morgan and others derived the name "Verneshot" in their paper theorizing a connection between extinction events and cratonic gas ejection.
References
Volcanology
Jules Verne
Extinction events
Geological hazards
Doomsday scenarios
Natural disasters
Tunguska event | Verneshot | [
"Physics",
"Biology"
] | 847 | [
"Physical phenomena",
"Evolution of the biosphere",
"Weather",
"Unsolved problems in physics",
"Extinction events",
"Natural disasters",
"Tunguska event"
] |
2,368,003 | https://en.wikipedia.org/wiki/GABAA-rho%20receptor |
The GABAA-rho receptor (previously known as the GABAC receptor) is a subclass of GABAA receptors composed entirely of rho (ρ) subunits. GABAA receptors including those of the ρ-subclass are ligand-gated ion channels responsible for mediating the effects of gamma-amino butyric acid (GABA), the major inhibitory neurotransmitter in the brain. The GABAA-ρ receptor, like other GABAA receptors, is expressed in many areas of the brain, but in contrast to other GABAA receptors, the GABAA-ρ receptor has especially high expression in the retina.
Nomenclature
A second type of ionotropic GABA receptor, insensitive to typical allosteric modulators of GABAA receptor channels such as benzodiazepines and barbiturates, was designated GABAС receptor. Native responses of the GABAC receptor type occur in retinal bipolar or horizontal cells across vertebrate species.
GABAС receptors are exclusively composed of ρ (rho) subunits that are related to GABAA receptor subunits. Although the term "GABAС receptor" is frequently used, GABAС may be viewed as a variant within the GABAA receptor family. Others have argued that the differences between GABAС and GABAA receptors are large enough to justify maintaining the distinction between these two subclasses of GABA receptors. However, since GABAС receptors are closely related in sequence, structure, and function to GABAA receptors and since other GABAA receptors besides those containing ρ subunits appear to exhibit GABAС pharmacology, the Nomenclature Committee of the IUPHAR has recommended that the GABAС term no longer be used and these ρ receptors should be designated as the ρ subfamily of the GABAA receptors (GABAA-ρ).
Function
In addition to containing a GABA binding site, the GABAA-ρ receptor complex conducts chloride ions across neuronal membranes. Binding of GABA to the receptor results in opening of this channel. When the reversal potential of chloride is less than the membrane potential, chloride ions flow down their electrochemical gradient into the cell. This influx of chloride ions lowers the membrane potential of the neuron, thus hyperpolarizes it, making it more difficult for these cells to conduct electrical impulses in the form of an action potential. Following stimulation by GABA, the chloride current produced by GABAA-ρ receptors is slow to initiate but sustained in duration. In contrast, the GABAA receptor current has a rapid onset and short duration. GABA is about 10 times more potent at GABAA-ρ than it is at most GABAA receptors.
Structure
Like other ligand-gated ion channels, the GABAA-ρ chloride channel is formed by oligomerization of five subunits arranged about a fivefold symmetry axis to form a central ion conducting pore. To date, three GABAA-ρ receptor subunits have been identified in humans:
ρ1 (GABRR1)
ρ2 (GABRR2)
ρ3 (GABRR3)
The above three subunits coassemble either to form functional homo-pentamers (ρ15, ρ25, ρ35) or hetero-pentamers (ρ1mρ2n, ρ2mρ3n where m + n = 5).
There is also evidence that ρ1 subunits can form hetero-pentameric complexes with GABAA receptor γ2 subunits.
Pharmacology
There are several pharmacological differences that distinguish GABAA-ρ from GABAA and GABAB receptors. For example, GABAA-ρ receptors are:
selectively activated by (+)-CAMP [(+)-cis-2-aminomethylcyclopropane-carboxylic acid] and blocked by TPMPA [(1,2,5,6-tetrahydropyridin-4-yl)methylphosphinic acid];
not sensitive to the GABAB agonist baclofen nor the GABAA receptor antagonist bicuculline;
not modulated by many GABAA receptor modulators such as barbiturates and benzodiazepines, but are modulated selectively by certain neuroactive steroids.
Selective Ligands
Agonists
CACA
CAMP
GABOB
Muscimol
Antagonists
Mixed GABAA-ρ / GABAB antagonists
ZAPA ((Z)-3-[(Aminoiminomethyl)thio]prop-2-enoic acid)
SKF-97541 (3-Aminopropyl(methyl)phosphinic acid)
CGP-36742 (3-aminopropyl-n-butyl-phosphinic acid)
Selective GABAA-ρ antagonists
TPMPA
(±)-cis-(3-Aminocyclopentyl)butylphosphinic acid
(S)-(4-Aminocyclopent-1-enyl)butylphosphinic acid
N2O
Genetics
In humans, GABAA-ρ receptor subunits ρ1 and ρ2 are encoded by the GABRR1 and GABRR2 genes, which are found on chromosome 6, whereas the gene for ρ3 (GABRR3) is found on chromosome 3. Mutations in the ρ1 or ρ2 genes may be responsible for some cases of autosomal recessive retinitis pigmentosa.
References
Transmembrane receptors
GABA | GABAA-rho receptor | [
"Chemistry"
] | 1,166 | [
"Transmembrane receptors",
"Signal transduction"
] |
2,368,411 | https://en.wikipedia.org/wiki/Diesel%20exhaust%20fluid | Diesel exhaust fluid (DEF; also known as AUS 32 and sometimes marketed as AdBlue) is a liquid used to reduce the amount of air pollution created by a diesel engine. Specifically, DEF is an aqueous urea solution made with 32.5% urea and 67.5% deionized water. DEF is consumed in a selective catalytic reduction (SCR) that lowers the concentration of nitrogen oxides () in the diesel exhaust emissions from a diesel engine.
Other names
In the international standard defining DEF (ISO 22241), it is referred to as AUS 32 (aqueous urea solution 32%). DEF is also sold as AdBlue, a registered trademark of the German Association of the Automotive Industry.
Several brands of SCR systems use DEF: BlueHDI is used by PSA Group vehicles including Peugeot, Citroën, and DS Automobiles brands; BlueTec by Daimler AG; and FLENDS (Final Low Emission New Diesel System) by UD Trucks. Blue Sky DEF is made and distributed for retail sale by Prime Lubes, Inc.
Background
Diesel engines are typically operated with a lean burn air-to-fuel ratio (over-stoichiometric ratio) to ensure the full combustion of soot and to prevent them from exhausting unburnt fuel. The excess air leads to the generation of NOx, which are harmful pollutants, from nitrogen in the atmosphere. SCR is used to reduce the amount of NOx released into the atmosphere. DEF from a separate tank is injected into the exhaust pipeline, and the exhaust heat decomposes it to ammonia. Within the SCR catalyst, the NOx are reduced by the ammonia into water and nitrogen, which are both nonpolluting. The water and nitrogen are then released into the atmosphere through the exhaust.
SCR was applied to automobiles by Nissan Diesel Corporation, and the first practical product "Nissan Diesel Quon" was introduced in 2004. With the cooperation of the oil and chemical industry, a 1,300-station infrastructure to supply DEF was prepared by September 2005 in Japan.
In 2007, the United States Environmental Protection Agency (EPA) enacted requirements to significantly reduce harmful exhaust emissions. To achieve this standard, Cummins and other diesel engine manufacturers developed an aftertreatment system that includes the use of a diesel particulate filter (DPF).
As the DPF is damaged by the sulfur in standard diesel fuel, diesel engines that conform to 2007 EPA emissions standards require ultra-low-sulfur diesel (ULSD) fuel to prevent damage to the DPF. After a brief transition period, ULSD fuel became common at fuel pumps in the United States and Canada.
The 2007 EPA regulations were meant to be an interim solution to allow manufacturers time to prepare for the more stringent 2010 EPA regulations, which reduced levels even further. In 2008, the concerns about compliance shifted to the infrastructure for DEF distribution.
The injection rate of DEF into the exhaust depends on the specific after-treatment system, but is typically 2–6% of diesel consumption volume. This low dosing rate ensures long fluid refill intervals and minimizes the tank's size and intrusion into vehicle packaging space. An electronic control unit adjusts the addition of fluid in accordance with parameters such as the NOx level in the exhaust gas (before the catalytic converter, after the catalytic converter, and possibly between catalytic converters if there is more than one), the current ammonia filling level, and engine operating temperature and speed.
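As a rough illustration of the 2–6% dosing figure, the sketch below estimates annual DEF consumption from an assumed annual diesel usage; the fuel volume and dosing rates are hypothetical inputs, not values from the article.

```python
def def_consumption_litres(diesel_litres, dosing_rate):
    """DEF usage estimated as a fixed fraction of the diesel volume consumed."""
    return diesel_litres * dosing_rate

# Hypothetical truck burning 40,000 L of diesel per year, at the quoted 2-6% range.
diesel_per_year = 40_000
for rate in (0.02, 0.04, 0.06):
    litres = def_consumption_litres(diesel_per_year, rate)
    print(f"dosing {rate:.0%}: ~{litres:,.0f} L of DEF per year")
```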
Chemistry
DEF is a 32.5% solution of urea, (NH2)2CO. When it is injected into the hot exhaust gas stream, the water evaporates and the urea thermally decomposes to form ammonia (NH3) and isocyanic acid (HNCO):
(NH2)2CO → NH3 + HNCO
The isocyanic acid reacts with the water vapor and hydrolyses to carbon dioxide and ammonia:
HNCO + H2O → CO2 + NH3
Overall, thus far:
(NH2)2CO + H2O → 2 NH3 + CO2
Ammonia, in the presence of oxygen and a catalyst, reduces two different nitrogen oxides:
4 NO + 4 NH3 + O2 → 4 N2 + 6 H2O ("standard SCR") and
6 NO2 + 8 NH3 → 7 N2 + 12 H2O ("NO2 SCR")
NO + NO2 + 2 NH3 → 2 N2 + 3 H2O ("fast SCR")
The overall reduction of NOx by urea is then:
2 (NH2)2CO + 4 NO + O2 → 4 N2 + 4 H2O + 2 CO2 and
4 (NH2)2CO + 6 NO2 → 7 N2 + 8 H2O + 4 CO2 and
(NH2)2CO + NO + NO2 → 2 N2 + 2 H2O + CO2
The ratio between NO2 and NO determines which reactions take place and how fast. The highest conversion rates are achieved if equal amounts of NO and NO2 are present, especially at temperatures between 200 °C and 350 °C. If there is more NO than NO2, fast SCR and standard SCR take place sequentially. If there is more NO2 than NO, fast SCR and NO2 SCR take place sequentially; however, NO2 SCR is slower than standard SCR, and ammonium nitrate can form and temporarily deactivate the catalytic converter.
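To make the stoichiometry concrete, the sketch below estimates how much 32.5% DEF is needed to reduce a given mass of NO via the standard-SCR route (2 (NH2)2CO + 4 NO + O2 → 4 N2 + 4 H2O + 2 CO2). The 1 kg of NO is a hypothetical input, not a value from the article.

```python
M_UREA = 60.06   # molar mass of (NH2)2CO, g/mol
M_NO = 30.01     # molar mass of NO, g/mol

def def_mass_for_no(no_kg, urea_fraction=0.325):
    """Mass of 32.5% DEF (kg) to reduce a given mass of NO, assuming the
    standard-SCR stoichiometry of 2 urea per 4 NO (0.5 mol urea per mol NO)."""
    mol_no = no_kg * 1000.0 / M_NO
    urea_kg = 0.5 * mol_no * M_UREA / 1000.0
    return urea_kg / urea_fraction

print(f"~{def_mass_for_no(1.0):.2f} kg of DEF per kg of NO")  # about 3.1 kg
```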
Operation in winter time
DEF freezes at −11 °C (12 °F). For the SCR exhaust cleaning system to function at low temperatures, a sufficient amount of the frozen DEF must be melted in as short a time as possible, preferably on the order of minutes. For example, 2010 EPA emissions requirements require full DEF coolant flow within 70 minutes.
In Europe, Regulation (EC) No 692/2008 specified in Annex XVI point 10 that DEF from a frozen tank at a core temperature of must become available within 20 minutes when starting the engine at .
Typically, the frozen DEF is melted by heat from the engine, e.g. engine coolant passing through the DEF tank, governed by a thermostatic coolant control valve. This method may take significant time before the SCR exhaust cleaning system is fully operational, often up to an hour.
Another method to thaw DEF (and thus allow for full SCR operation) is to integrate an electric heater into the DEF tank. This heater must be sized, positioned, and powered adequately to rapidly melt sufficient frozen DEF. It should preferably be self-regulating, both so that it does not overheat if part of the heater is above the liquid level and so that complicated sensing and temperature-regulation systems are not needed. Furthermore, the heater surface must not get too hot, as DEF begins to decompose when overheated. PTC heaters are often used to achieve this.
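For a rough feel for the heater sizing problem, the sketch below estimates the average power needed to melt an assumed mass of frozen DEF within a target time. The latent-heat value and all inputs are assumptions for illustration (heat losses and warming of the melted liquid are ignored); none of the numbers come from this article.

```python
def heater_power_watts(frozen_kg, minutes, latent_heat_kj_per_kg=270.0):
    """Average power to melt `frozen_kg` of DEF in `minutes`, assuming a
    latent heat of fusion around 270 kJ/kg and no losses."""
    energy_j = frozen_kg * latent_heat_kj_per_kg * 1000.0
    return energy_j / (minutes * 60.0)

# Hypothetical: melt 2 kg of frozen DEF within 20 minutes.
print(f"~{heater_power_watts(2.0, 20.0):.0f} W average heating power")  # ~450 W
```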
Safety and storage
The urea solution is clear, non-toxic and safe to handle. Since urea has corrosive impact on metals like aluminium, DEF is stored and transported in special containers. These containers are typically made of stainless steel. Vehicles' selective catalytic reduction (SCR) systems and DEF dispensers are designed in a manner that there is no corrosive impact of urea on them. It is recommended that DEF be stored in a cool, dry, and well-ventilated area that is out of direct sunlight. Bulk volumes of DEF are compatible for storage within polyethylene containers (HDPE, XLPE), fiberglass reinforced plastic (FRP), and steel tanks. DEF is also often handled in intermediate bulk containers for storage and shipping.
DEF is offered to consumers in a variety of quantities ranging from containers for single or repeated small usage, up to bulk carriers for consumers requiring a large amount of DEF. As of 2013, many truck stops have added DEF pumps. These are usually adjacent to fuel pumps so the driver can fill both tanks without moving the truck.
In Europe, increasing numbers of fuel stations offer dispensers that pump Diesel Exhaust Fluid rather than the traditional method of using disposable, single-use plastic containers. These pumped dispensers are often targeted at commercial vehicles but are now also starting to emerge as a solution for the growing number of passenger cars that require DEF by volume.
At airports, where DEF can sometimes be required for diesel ground service vehicles, its labelling and storage must be carefully managed to avoid accidentally servicing jet aircraft with DEF instead of fuel system icing inhibitor, a mistake that has been blamed for multiple in-flight engine failure and grounding incidents.
Supply shortage
South Korea
As of late 2021, a shortage of DEF in South Korea was continuing and wreaking havoc on its economy.
As most of the urea used is supplied by China, imports have slowed since China introduced mandatory inspections of urea exports in September.
Nearly 97% of South Korea's urea imports came from China between January and September. In 2015, South Korea had made it mandatory for diesel cars to use urea solutions to control emissions, a move that now impacts 40% of registered vehicles. Diesel vehicles made since 2015 were required to be fitted with SCR systems. The South Korean government started rationing urea solution, and banned its resale as panic buying by drivers exacerbated an acute shortage that could cause transport and industry to grind to a halt. A KC-330 Cygnus was sent to import Diesel exhaust fluid from Australia to ease a supply shortage of the key material used in diesel vehicles.
Australia
In early December 2021, the Australian National Road Transport Association also raised concerns about a shortage of DEF in the country due to the shortage of urea in China; China had capped exports to protect its domestic supplies, and DEF prices were rising. By mid-December, there was approximately 7 weeks' supply of AdBlue left in Australia. On 14 December, the Australian company IOR stated that it would build a new plant.
References
External links
ISO 22241-1:2019 Diesel engines — NOx reduction agent AUS 32 — Part 1: Quality requirements
Automotive technology tradenames
Diesel engine technology
Pollution control technologies
Air pollution control systems
NOx control
Solutions
See also
Rolling coal
Wet stacking, a term for when diesel engines exhaust unburned fuel, whether unintentionally or as part of rolling coal
Combustion vehicle ban | Diesel exhaust fluid | [
"Chemistry",
"Engineering"
] | 1,986 | [
"Homogeneous chemical mixtures",
"Solutions",
"Pollution control technologies",
"Environmental engineering"
] |
2,368,803 | https://en.wikipedia.org/wiki/Magnetic%20flow%20meter | A magnetic flow meter (mag meter, electromagnetic flow meter) is a transducer that measures fluid flow by the voltage induced across the liquid by its flow through a magnetic field. A magnetic field is applied to the metering tube, which results in a potential difference proportional to the flow velocity perpendicular to the flux lines. The physical principle at work is electromagnetic induction. The magnetic flow meter requires a conducting fluid, for example, water that contains ions, and an electrical insulating pipe surface, for example, a rubber-lined steel tube.
If the magnetic field direction were constant, electrochemical and other effects at the electrodes would make the potential difference difficult to distinguish from the potential difference induced by the fluid flow. To prevent this, in modern magnetic flowmeters the magnetic field is constantly reversed, cancelling out the electrochemical potential difference, which does not change direction with the magnetic field. This, however, prevents the use of permanent magnets for magnetic flowmeters.
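To first order, the induced voltage in a magnetic flow meter is proportional to the flux density, the electrode spacing, and the flow velocity (a Faraday-induction relation of the form U ≈ B·D·v). The sketch below inverts that relation to recover velocity and volume flow from a measured voltage; the field strength, pipe diameter, and voltage are assumed values for illustration, not figures from the article.

```python
import math

def velocity_from_voltage(voltage_v, b_tesla, diameter_m):
    """Mean flow velocity (m/s) from the induced electrode voltage, assuming U = B * D * v."""
    return voltage_v / (b_tesla * diameter_m)

def volumetric_flow(velocity_m_s, diameter_m):
    """Volume flow rate (m^3/s) through a full circular pipe of the given diameter."""
    return velocity_m_s * math.pi * (diameter_m / 2.0) ** 2

# Assumed: 0.5 mV measured across the electrodes, 5 mT field, 100 mm pipe.
v = velocity_from_voltage(0.5e-3, 5e-3, 0.1)
print(f"velocity ~ {v:.2f} m/s, flow ~ {volumetric_flow(v, 0.1) * 1000:.1f} L/s")
```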
See also
Electromagnetic pump
Flow measurement
Magnetohydrodynamics
Water metering
External links
3D animation of the Electromagnetic Flow Measuring Principle
eFunda: Introduction to Magnetic Flowmeters
Principles of Electromagnetic Flow Measurement
eLearning course on electromagnetic flow meters
Flow meters
Electromagnetic components | Magnetic flow meter | [
"Chemistry",
"Technology",
"Engineering"
] | 246 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
2,368,856 | https://en.wikipedia.org/wiki/Ultrasonic%20flow%20meter | An ultrasonic flow meter is a type of flow meter that measures the velocity of a fluid with ultrasound to calculate volume flow. Using ultrasonic transducers, the flow meter can measure the average velocity along the path of an emitted beam of ultrasound, by averaging the difference in measured transit time between the pulses of ultrasound propagating into and against the direction of the flow or by measuring the frequency shift from the Doppler effect. Ultrasonic flow meters are affected by the acoustic properties of the fluid and can be impacted by temperature, density, viscosity and suspended particulates depending on the exact flow meter. They vary greatly in purchase price but are often inexpensive to use and maintain because they do not use moving parts, unlike mechanical flow meters.
Means of operation
There are three different types of ultrasonic flow meters. Transmission (or contrapropagating transit-time) flow meters can be distinguished into in-line (intrusive, wetted) and clamp-on (non-intrusive) varieties. Ultrasonic flow meters that use the Doppler shift are called reflection or Doppler flow meters. The third type is the open-channel flow meter.
Principle
Time transit flow meter
Ultrasonic flow meters measure the difference between the transit times of ultrasonic pulses propagating with and against the flow direction. This time difference is a measure for the average velocity of the fluid along the path of the ultrasonic beam. By using the absolute transit times t_up and t_dn, both the averaged fluid velocity and the speed of sound can be calculated. Using these two transit times, the distance L between the receiving and transmitting transducers and the inclination angle α, and assuming that the sound travels against the flow when going upstream and with the flow when going downstream, one can write the following two equations from the definition of velocity:
t_up = L / (c − v cos α) and t_dn = L / (c + v cos α)
By adding and subtracting the above, the equations can be solved for v and c:
v = (L / (2 cos α)) (1/t_dn − 1/t_up) and c = (L / 2) (1/t_dn + 1/t_up)
where v is the average velocity of the fluid along the sound path and c is the speed of sound.
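A minimal numerical sketch of the transit-time relations above; the path length, angle, sound speed, and flow velocity are assumed water-like values, not figures from the article.

```python
import math

def transit_time_flow(t_up_s, t_dn_s, path_length_m, angle_deg):
    """Return (fluid velocity, speed of sound) from upstream/downstream transit times."""
    cos_a = math.cos(math.radians(angle_deg))
    v = path_length_m / (2.0 * cos_a) * (1.0 / t_dn_s - 1.0 / t_up_s)
    c = path_length_m / 2.0 * (1.0 / t_dn_s + 1.0 / t_up_s)
    return v, c

# Assumed: 0.2 m path at 45 degrees, transit times consistent with c ~ 1480 m/s and v ~ 2 m/s.
L, angle, c0, v0 = 0.2, 45.0, 1480.0, 2.0
t_up = L / (c0 - v0 * math.cos(math.radians(angle)))
t_dn = L / (c0 + v0 * math.cos(math.radians(angle)))
print(transit_time_flow(t_up, t_dn, L, angle))  # recovers approximately (2.0, 1480.0)
```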
Doppler shift flow meters
Another method in ultrasonic flow metering is the use of the Doppler shift that results from the reflection of an ultrasonic beam off sonically reflective materials, such as solid particles or entrained air bubbles in a flowing fluid, or the turbulence of the fluid itself, if the liquid is clean.
Doppler flowmeters are used for slurries, liquids with bubbles, gases with sound-reflecting particles.
This type of flow meter can also be used to measure the rate of blood flow, by passing an ultrasonic beam through the tissues, bouncing it off a reflective plate, then reversing the direction of the beam and repeating the measurement, the volume of blood flow can be estimated. The frequency of the transmitted beam is affected by the movement of blood in the vessel and by comparing the frequency of the upstream beam versus downstream the flow of blood through the vessel can be measured. The difference between the two frequencies is a measure of true volume flow. A wide-beam sensor can also be used to measure flow independent of the cross-sectional area of the blood vessel.
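The velocity component along the beam follows from the usual reflected-beam Doppler relation Δf ≈ 2·f0·(v/c)·cos θ. The sketch below rearranges it for velocity; the probe frequency, insonation angle, measured shift, and tissue sound speed are assumed values, not figures from the article.

```python
import math

def doppler_velocity(delta_f_hz, f0_hz, angle_deg, c_m_s=1540.0):
    """Flow velocity (m/s) from the Doppler shift of a reflected ultrasound beam,
    assuming delta_f = 2 * f0 * (v / c) * cos(theta); c defaults to ~1540 m/s in soft tissue."""
    return delta_f_hz * c_m_s / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

# Assumed: 2 MHz probe, 60 degree beam-to-flow angle, 1.3 kHz measured shift.
print(f"~{doppler_velocity(1300.0, 2e6, 60.0):.2f} m/s along the vessel")  # ~1 m/s
```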
Open channel flow meters
In this case, the ultrasonic element is actually measuring the height of the water in the open channel; based on the geometry of the channel, the flow can be determined from the height. The ultrasonic sensor usually also has a temperature sensor with it because the speed of sound in air is affected by the temperature.
See also
Flow measurement
Magnetic flow meter
Turbine flow meter
References
Lipták, Béla G.: Process Measurement and Analysis, Volume 1. CRC Press (2003), (v. 1)
Ultrasonic Acoustic Sensing Brown University
Lynnworth, L.C.: Ultrasonic Measurements for Process Control. Academic Press, Inc. San Diego.
External links
Doppler Shift for Sound and Light at MathPages
The Doppler Effect and Sonic Booms (D.A. Russell, Kettering University)
Advantages of Ultrasonic Flowmeters
Ultrasound
Flow meters | Ultrasonic flow meter | [
"Chemistry",
"Technology",
"Engineering"
] | 812 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
2,369,672 | https://en.wikipedia.org/wiki/Air%20handler | An air handler, or air handling unit (often abbreviated to AHU), is a device used to regulate and circulate air as part of a heating, ventilating, and air-conditioning (HVAC) system. An air handler is usually a large metal box containing a blower, furnace or A/C elements, filter racks or chambers, sound attenuators, and dampers. Air handlers usually connect to a ductwork ventilation system that distributes the conditioned air through the building and returns it to the AHU, sometimes exhausting air to the atmosphere and bringing in fresh air. Sometimes AHUs discharge (supply) and admit (return) air directly to and from the space served without ductwork.
Small air handlers, for local use, are called terminal units, and may only include an air filter, coil, and blower; these simple terminal units are called blower coils or fan coil units. A larger air handler that conditions 100% outside air, and no recirculated air, is known as a makeup air unit (MAU) or fresh air handling unit (FAHU). An air handler designed for outdoor use, typically on roofs, is known as a packaged unit (PU), heating and air conditioning unit (HCU), or rooftop unit (RTU).
Construction
The air handler is normally constructed around a framing system with metal infill panels as required to suit the configuration of the components. In its simplest form the frame may be made from metal channels or sections, with single skin metal infill panels. The metalwork is normally galvanized for long term protection. For outdoor units some form of weatherproof lid and additional sealing around joints is provided.
Larger air handlers will be manufactured from a square section steel framing system with double skinned and insulated infill panels. Such constructions reduce heat loss or heat gain from the air handler, as well as providing acoustic attenuation. Larger air handlers may be several meters long and are manufactured in a sectional manner and therefore, for strength and rigidity, steel section base rails are provided under the unit.
Where supply and extract air is required in equal proportions for a balanced ventilation system, it is common for the supply and extract air handlers to be joined together, either in a side-by-side or a stacked configuration.
Air handling units types
Air handling units can be classified according to six factors, which determine their type:
Application (air handling unit usage)
Air flow control (CAV or VAV air handlers)
Zone control (single zone or multi zone air handlers)
Fan location (draw-through or blow-through)
Direction of outlet air flow (front, up, or down)
Package model (horizontal or vertical)
The first of these classifications is the most common in the HVAC market; in fact, most companies advertise their products by air handling unit application:
Normal
Hygienic
Ceiling mounted
Components
The major types of components are described here in approximate order, from the return duct (input to the AHU), through the unit, to the supply duct (AHU output).
Filters
Air filtration is almost always present in order to provide clean dust-free air to the building occupants. It may be via simple low-MERV pleated media, HEPA, electrostatic, or a combination of techniques. Gas-phase and ultraviolet air treatments may be employed as well.
Filtration is typically placed first in the AHU in order to keep all the downstream components clean. Depending upon the grade of filtration required, typically filters will be arranged in two (or more) successive banks with a coarse-grade panel filter provided in front of a fine-grade bag filter, or other "final" filtration medium. The panel filter is cheaper to replace and maintain, and thus protects the more expensive bag filters.
The life of a filter may be assessed by monitoring the pressure drop through the filter medium at design air volume flow rate. This may be done by means of a visual display using a pressure gauge, or by a pressure switch linked to an alarm point on the building control system. Failure to replace a filter may eventually lead to its collapse, as the forces exerted upon it by the fan overcome its inherent strength, resulting in collapse and thus contamination of the air handler and downstream ductwork.
Hot (heating, a.k.a. furnace) and cold (air conditioning) elements
Air handlers may need to provide hot air, cold air, or both to change the supply air temperature and humidity level, depending on the location and the application. Such conditioning is provided by heat exchanger coils within the air handling unit air stream; such coils may be direct or indirect in relation to the medium providing the heating or cooling effect.
Direct heat exchangers include those for gas-fired fuel-burning heaters or a refrigeration evaporator, placed directly in the air stream. Electric resistance heaters and heat pumps can be used as well. Evaporative cooling is possible in dry climates.
Indirect coils use hot water or steam for heating, and chilled water or glycol for cooling (prime energy for heating and air conditioning is provided by central plant elsewhere in the building). Coils are typically manufactured from copper for the tubes, with copper or aluminum fins to aid heat transfer. Cooling coils will also employ eliminator plates to remove and drain condensate. The hot water or steam is provided by a central boiler, and the chilled water is provided by a central chiller. Downstream temperature sensors are typically used to monitor and control "off coil" temperatures, in conjunction with an appropriate motorized control valve prior to the coil.
If dehumidification is required, then the cooling coil is employed to over-cool so that the dew point is reached and condensation occurs. A heater coil placed after the cooling coil re-heats the air (therefore known as a re-heat coil) to the desired supply temperature. This process has the effect of reducing the relative humidity level of the supply air.
In colder climates, where winter temperatures regularly drop below freezing, then frost coils or pre-heat coils are often employed as a first stage of air treatment to ensure that downstream filters or chilled water coils are protected against freezing. The control of the frost coil is such that if a certain off-coil air temperature is not reached then the entire air handler is shut down for protection.
Humidifier
Humidification is often necessary in colder climates where continuous heating will make the air drier, resulting in uncomfortable air quality and increased static electricity. Various types of humidification may be used:
Evaporative: dry air blown over a reservoir will evaporate some of the water. The rate of evaporation can be increased by spraying the water onto baffles in the air stream.
Vaporizer: steam or vapor from a boiler is blown directly into the air stream.
Spray mist: water is diffused either by a nozzle or other mechanical means into fine droplets and carried by the air.
Ultrasonic: A tray of fresh water in the airstream is excited by an ultrasonic device forming a fog or water mist.
Wetted medium: A fine fibrous medium in the airstream is kept moist with fresh water from a header pipe with a series of small outlets. As the air passes through the medium it entrains the water in fine droplets. This type of humidifier can quickly clog if the primary air filtration is not maintained in good order.
Mixing chamber
In order to maintain indoor air quality, air handlers commonly have provisions to allow the introduction of outside air into, and the exhausting of air from the building. In temperate climates, mixing the right amount of cooler outside air with warmer return air can be used to approach the desired supply air temperature. A mixing chamber is therefore used which has dampers controlling the ratio between the return, outside, and exhaust air.
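Because the mixed-air condition is essentially a flow-weighted average of the return and outside airstreams, the mixed-air temperature can be estimated directly from the damper ratio. A minimal sketch with assumed temperatures and outside-air fraction (humidity effects ignored; not values from the article):

```python
def mixed_air_temp(return_temp_c, outside_temp_c, outside_air_fraction):
    """Flow-weighted mixed-air temperature for a given outside-air fraction."""
    return (1.0 - outside_air_fraction) * return_temp_c + outside_air_fraction * outside_temp_c

# Assumed: 24 C return air, 10 C outside air, dampers set for 30% outside air.
print(f"mixed air ~ {mixed_air_temp(24.0, 10.0, 0.30):.1f} C")  # ~19.8 C
```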
Blower/fan
Air handlers typically employ a large squirrel cage blower driven by an AC induction electric motor to move the air. The blower may operate at a single speed, offer a variety of set speeds, or be driven by a variable-frequency drive to allow a wide range of air flow rates. Flow rate may also be controlled by inlet vanes or outlet dampers on the fan. Some residential air handlers in the USA (central "furnaces" or "air conditioners") use a brushless DC electric motor that has variable speed capabilities. Air handlers in Europe, Australia, and New Zealand now commonly use backward-curved fans without a scroll, or "plug fans". These are driven using high-efficiency EC (electronically commutated) motors with built-in speed control. The higher the RTU temperature, the slower the air will flow, and the lower the RTU temperature, the faster the air will flow.
Multiple blowers may be present in large commercial air handling units, typically placed at the end of the AHU and the beginning of the supply ductwork (therefore also called "supply fans"). They are often augmented by fans in the return air duct ("return fans") pushing the air into the AHU.
Balancing
Unbalanced fans wobble and vibrate. For home AC fans, this can be a major problem: air circulation at the vents is greatly reduced (wobble wastes energy), efficiency is compromised, and noise is increased. Another major problem with unbalanced fans is reduced bearing life: the bearings attached to the fan and shaft can fail long before their expected service life.
Weights can be strategically placed so that the fan spins smoothly (for a ceiling fan, trial-and-error placement typically resolves the problem). Home/central AC fans and other large fans are typically taken to shops, which have special balancers for more complicated balancing (trial and error can cause damage before the correct points are found). The fan motor itself does not typically vibrate.
Heat recovery device
A heat recovery heat exchanger may be fitted to the air handler between the supply and extract airstreams for energy savings and to increase capacity. Common types include the following (a worked recovery estimate follows the list):
Recuperator, or Plate Heat exchanger: A sandwich of plastic or metal plates with interlaced air paths. Heat is transferred between airstreams from one side of the plate to the other. The plates are typically spaced at 4 to 6mm apart. Heat recovery efficiency up to 70%.
Thermal wheel, or Rotary heat exchanger: A slowly rotating matrix of finely corrugated metal, operating in both opposing airstreams. When the air handling unit is in heating mode, heat is absorbed as air passes through the matrix in the exhaust airstream, during one half rotation, and released during the second half rotation into the supply airstream in a continuous process. When the air handling unit is in cooling mode, heat is released as air passes through the matrix in the exhaust airstream, during one half rotation, and absorbed during the second half rotation into the supply airstream. Heat recovery efficiency up to 85%. Wheels are also available with a hygroscopic coating to provide latent heat transfer and also the drying or humidification of airstreams.
Run around coil: Two air to liquid heat exchanger coils, in opposing airstreams, piped together with a circulating pump and using water or a brine as the heat transfer medium. This device, although not very efficient, allows heat recovery between remote and sometimes multiple supply and exhaust airstreams. Heat recovery efficiency up to 50%.
Heat pipe: Operating in both opposing air paths, using a confined refrigerant as a heat transfer medium. The heat pipe uses multiple sealed pipes mounted in a coil configuration with fins to increase heat transfer. Heat is absorbed on one side of the pipe, by evaporation of the refrigerant, and released at the other side, by condensation of the refrigerant. Condensed refrigerant flows by gravity to the first side of the pipe to repeat the process. Heat recovery efficiency up to 65%.
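As noted above the list, a rough sensible-heat recovery estimate multiplies the device effectiveness by the air mass flow, specific heat, and temperature difference between the airstreams (Q ≈ ε·ṁ·c_p·ΔT). The sketch below uses assumed airflow, temperatures, and effectiveness, not figures from the article.

```python
def recovered_heat_kw(effectiveness, airflow_m3_s, t_exhaust_c, t_outside_c,
                      air_density=1.2, cp_kj_per_kg_k=1.005):
    """Approximate sensible heat recovered (kW): Q = eff * m_dot * cp * dT."""
    m_dot = airflow_m3_s * air_density                     # air mass flow, kg/s
    return effectiveness * m_dot * cp_kj_per_kg_k * (t_exhaust_c - t_outside_c)

# Assumed: thermal wheel at 80% effectiveness, 2 m^3/s airflow, 22 C exhaust, -5 C outside air.
print(f"~{recovered_heat_kw(0.80, 2.0, 22.0, -5.0):.1f} kW recovered")  # ~52 kW
```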
Controls
Controls are necessary to regulate every aspect of an air handler, such as: flow rate of air, supply air temperature, mixed air temperature, humidity, air quality. They may be as simple as an off/on thermostat or as complex as a building automation system using BACnet or LonWorks, for example.
Common control components include temperature sensors, humidity sensors, sail switches, actuators, motors, and controllers.
Vibration isolators
The blowers in an air handler can create substantial vibration and the large area of the duct system would transmit this noise and vibration to the occupants of the building. To avoid this, vibration isolators (flexible sections) are normally inserted into the duct immediately before and after the air handler and often also between the fan compartment and the rest of the AHU. The rubberized canvas-like material of these sections allows the air handler components to vibrate without transmitting this motion to the attached ducts.
The fan compartment can be further isolated by placing it on spring suspension, neoprene pads, or hung on spring hangers, which will mitigate the transfer of vibration through the structure.
Sound attenuators
The blower in the air handler also generates noise, which should be attenuated before ductwork enters a noise-sensitive room. To achieve meaningful noise reduction in a relatively short length, a sound attenuator is used. The attenuator is a specialty duct accessory that typically consists of an inner perforated baffle with sound-absorptive insulation. Sound attenuators may take the place of ductwork; conversely, inline attenuators are located close to the blower and have a bellmouth profile to minimize system effects.
Major manufacturers
AAON
Carrier Corporation (also makes Bryant and Payne brands)
CIAT Group
Daikin Industries (also makes McQuay International, Goodman, and Airfel brands)
Johnson Controls (also makes York International brand)
Lennox International
Rheem (also makes Ruud)
Trane
Vertiv
See also
HVAC
Indoor air quality
Thermal comfort
References
Heating, ventilation, and air conditioning
Mechanical engineering | Air handler | [
"Physics",
"Engineering"
] | 2,905 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
2,369,798 | https://en.wikipedia.org/wiki/Hadamard%20factorization%20theorem | In mathematics, and particularly in the field of complex analysis, the Hadamard factorization theorem asserts that every entire function with finite order can be represented as a product involving its zeroes and an exponential of a polynomial. It is named for Jacques Hadamard.
The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root. It is closely related to Weierstrass factorization theorem, which does not restrict to entire functions with finite orders.
Formal statement
Define the Hadamard canonical factors E_p(z) := (1 − z) exp(z + z^2/2 + ⋯ + z^p/p). Entire functions f of finite order ρ have Hadamard's canonical representation:
f(z) = z^m e^{Q(z)} ∏_{n=1}^∞ E_p(z/a_n)
where the a_n are those roots of f that are not zero (a_n ≠ 0), m is the order of the zero of f at z = 0 (the case m = 0 being taken to mean f(0) ≠ 0), Q is a polynomial (whose degree we shall call q), and p is the smallest non-negative integer such that the series
∑_{n=1}^∞ 1/|a_n|^{p+1}
converges. The non-negative integer g = max(p, q) is called the genus of the entire function f. In this notation, g ≤ ρ ≤ g + 1. In other words: If the order ρ is not an integer, then g = [ρ] is the integer part of ρ. If the order is a positive integer, then there are two possibilities: g = ρ − 1 or g = ρ.
Furthermore, Jensen's inequality implies that the roots of f are distributed sparsely, with critical exponent at most ρ.
For example, sin, cos and exp are entire functions of genus g = 1.
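As a worked illustration of these notions (a standard textbook example, not taken from this article), the Hadamard factorization of the sine function can be written out explicitly:

```latex
% Hadamard factorization of sin(pi z): order \rho = 1, genus g = 1.
\sin(\pi z) \;=\; \pi z \prod_{n=1}^{\infty}\Bigl(1 - \frac{z^{2}}{n^{2}}\Bigr)
           \;=\; \pi z \prod_{n \neq 0} E_{1}\!\Bigl(\tfrac{z}{n}\Bigr),
\qquad E_{1}(w) = (1-w)\,e^{w}.
% Here p = 1 (since \sum_n 1/|n| diverges while \sum_n 1/|n|^{2} converges),
% Q is constant (q = 0), so g = \max(p, q) = 1, consistent with g \le \rho \le g + 1.
```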
Critical exponent
Define the critical exponent of the roots of f as
ρ_1 := limsup_{r→∞} log n(r) / log r,
where n(r) is the number of roots with modulus < r. In other words, we have an asymptotic bound on the growth behavior of the number of roots of the function: for every ε > 0, n(r) = O(r^{ρ_1 + ε}) but not n(r) = O(r^{ρ_1 − ε}). It is clear that ρ_1 ≥ 0.
Theorem: If is an entire function with infinitely many roots, thenNote: These two equalities are purely about the limit behaviors of a real number sequence that diverges to infinity. It does not involve complex analysis.
Proposition: The critical exponent of the roots is at most the order ρ, by Jensen's formula.
Proof
Since is also an entire function with the same order and genus, we can wlog assume .
If has only finitely many roots, then with the function of order . Thus by an application of the Borel–Carathéodory theorem, is a polynomial of degree , and so we have .
Otherwise, has infinitely many roots. This is the tricky part and requires splitting into two cases. First show that , then show that .
Define the function where . We will study the behavior of .
Bounds on the behaviour of
In the proof, we need four bounds on :
For any , when .
For any , there exists such that when .
For any , there exists such that when .
for all , and as .
These are essentially proved in the similar way. As an example, we prove the fourth one.where is an entire function. Since it is entire, for any , it is bounded in . So inside .
Outside , we have
is well-defined
Source:
For any , we show that the sum converges uniformly over .
Since only finitely many , we can split the sum to a finite bulk and an infinite tail:The bulk term is a finite sum, so it converges uniformly. It remains to bound the tail term.
By bound (1) on , . So if is large enough, for some ,Since , the last sum is finite.
As usual in analysis, we fix some small .
Then the goal is to show that is of order . This does not exactly work, however, due to bad behavior of near . Consequently, we need to pepper the complex plane with "forbidden disks", one around each , each with radius . Then since by the previous result on , we can pick an increasing sequence of radii that diverge to infinity, such that each circle avoids all these forbidden disks.
Thus, if we can prove a bound of form for all large that avoids these forbidden disks, then by the same application of Borel–Carathéodory theorem, for any , and so as we take , we obtain .
Since by the definition of , it remains to show that , that is, there exists some constant such that for all large that avoids these forbidden disks.
As usual in analysis, this infinite sum can be split into two parts: a finite bulk and an infinite tail term, each of which is to be separately handled. There are finitely many with modulus and infinitely many with modulus . So we have to bound:The upper-bounding can be accomplished by the bounds (2), (3) on , and the assumption that is outside every forbidden disk. Details are found in.
This is a corollary of the following:If has genus , then .
Split the sum to three parts: The first two terms are . The third term is bounded by bound (4) of : By assumption, , so . Hence the above sum is
Applications
With Hadamard factorization we can prove some special cases of Picard's little theorem.
Theorem: If is entire, nonconstant, and has finite order, then it assumes either the whole complex plane or the plane minus a single point.
Proof: If f does not assume the value z_0, then by Hadamard factorization, f(z) − z_0 = e^{Q(z)} for a nonconstant polynomial Q. By the fundamental theorem of algebra, Q assumes all values, so e^{Q} assumes all nonzero values.
Theorem: If f is entire, nonconstant, and has finite, non-integer order ρ, then it assumes the whole complex plane infinitely many times.
Proof: For any value w, it suffices to prove that f − w has infinitely many roots. Expand f − w into its Hadamard representation f(z) − w = z^m e^{Q(z)} ∏_n E_p(z/a_n). If the product is finite, then the order ρ is an integer.
References
Notes
Theorems in complex analysis | Hadamard factorization theorem | [
"Mathematics"
] | 1,136 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
2,369,850 | https://en.wikipedia.org/wiki/Bochner%27s%20theorem | In mathematics, Bochner's theorem (named for Salomon Bochner) characterizes the Fourier transform of a positive finite Borel measure on the real line. More generally in harmonic analysis, Bochner's theorem asserts that under Fourier transform a continuous positive-definite function on a locally compact abelian group corresponds to a finite positive measure on the Pontryagin dual group. The case of sequences was first established by Gustav Herglotz (see also the related Herglotz representation theorem.)
The theorem for locally compact abelian groups
Bochner's theorem for a locally compact abelian group G, with dual group Ĝ, says the following:
Theorem For any normalized continuous positive-definite function f on G (normalization here means that f is 1 at the unit of G), there exists a unique probability measure μ on Ĝ such that
f(g) = ∫_{Ĝ} ξ(g) dμ(ξ),
i.e. f is the Fourier transform of a unique probability measure μ on Ĝ. Conversely, the Fourier transform of a probability measure μ on Ĝ is necessarily a normalized continuous positive-definite function f on G. This is in fact a one-to-one correspondence.
The Gelfand–Fourier transform is an isomorphism between the group C*-algebra C*(G) and C0(Ĝ). The theorem is essentially the dual statement for states of the two abelian C*-algebras.
The proof of the theorem passes through vector states on strongly continuous unitary representations of (the proof in fact shows that every normalized continuous positive-definite function must be of this form).
Given a normalized continuous positive-definite function on , one can construct a strongly continuous unitary representation of in a natural way: Let be the family of complex-valued functions on with finite support, i.e. for all but finitely many . The positive-definite kernel induces a (possibly degenerate) inner product on . Quotienting out degeneracy and taking the completion gives a Hilbert space
whose typical element is an equivalence class . For a fixed in , the "shift operator" defined by , for a representative of , is unitary. So the map
is a unitary representations of on . By continuity of , it is weakly continuous, therefore strongly continuous. By construction, we have
where is the class of the function that is 1 on the identity of and zero elsewhere. But by Gelfand–Fourier isomorphism, the vector state on is the pull-back of a state on , which is necessarily integration against a probability measure . Chasing through the isomorphisms then gives
On the other hand, given a probability measure on , the function
is a normalized continuous positive-definite function. Continuity of follows from the dominated convergence theorem. For positive-definiteness, take a nondegenerate representation of . This extends uniquely to a representation of its multiplier algebra and therefore a strongly continuous unitary representation . As above we have given by some vector state on
therefore positive-definite.
The two constructions are mutual inverses.
Special cases
Bochner's theorem in the special case of the discrete group Z is often referred to as Herglotz's theorem (see Herglotz representation theorem) and says that a function f on Z with f(0) = 1 is positive-definite if and only if there exists a probability measure μ on the circle T such that
f(n) = ∫_T e^{inθ} dμ(θ).
Similarly, a continuous function f on R with f(0) = 1 is positive-definite if and only if there exists a probability measure μ on R such that
f(t) = ∫_R e^{itx} dμ(x).
Applications
In statistics, Bochner's theorem can be used to describe the serial correlation of a certain type of time series. A sequence of random variables {f_n} of mean 0 is a (wide-sense) stationary time series if the covariance
Cov(f_n, f_m)
only depends on n − m. The function
g(n − m) = Cov(f_n, f_m)
is called the autocovariance function of the time series. By the mean-zero assumption,
g(n − m) = ⟨f_n, f_m⟩,
where ⟨·, ·⟩ denotes the inner product on the Hilbert space of random variables with finite second moments. It is then immediate that g is a positive-definite function on the integers Z. By Bochner's theorem, there exists a unique positive measure μ on [0, 1] (identified with the dual group of Z, the circle) such that
g(k) = ∫ e^{2πikx} dμ(x).
This measure is called the spectral measure of the time series. It yields information about the "seasonal trends" of the series.
For example, let be an -th root of unity (with the current identification, this is ) and be a random variable of mean 0 and variance 1. Consider the time series . The autocovariance function is
Evidently, the corresponding spectral measure is the Dirac point mass centered at . This is related to the fact that the time series repeats itself every periods.
When g has sufficiently fast decay, the measure μ is absolutely continuous with respect to the Lebesgue measure, and its Radon–Nikodym derivative f is called the spectral density of the time series. When g lies in ℓ¹(Z), f is the Fourier transform of g.
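As a small numerical illustration of the last point, the sketch below uses a hypothetical AR(1)-style autocovariance (not an example from this article), approximates the spectral density by a truncated Fourier sum, and checks that it is non-negative, as Bochner's theorem guarantees for a positive-definite autocovariance.

```python
import numpy as np

phi = 0.6                       # assumed AR(1) parameter; g(k) = phi**|k| is positive-definite
ks = np.arange(-200, 201)       # truncation of the autocovariance sequence
g = phi ** np.abs(ks)

freqs = np.linspace(0.0, 1.0, 501)
# Truncated Fourier sum f(x) ~= sum_k g(k) exp(-2*pi*i*k*x); real and >= 0 for positive-definite g.
spectral_density = np.real(np.exp(-2j * np.pi * np.outer(freqs, ks)) @ g)

closed_form = (1 - phi**2) / (1 - 2 * phi * np.cos(2 * np.pi * freqs) + phi**2)
print(spectral_density.min() >= -1e-8)                        # non-negative up to truncation error
print(np.allclose(spectral_density, closed_form, atol=1e-3))  # matches the known AR(1) density
```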
See also
Bochner-Minlos theorem
Characteristic function (probability theory)
Positive-definite function on a group
Notes
References
M. Reed and Barry Simon, Methods of Modern Mathematical Physics, vol. II, Academic Press, 1975.
Theorems in harmonic analysis
Theorems in measure theory
Theorems in functional analysis
Theorems in Fourier analysis
Theorems in statistics | Bochner's theorem | [
"Mathematics"
] | 1,005 | [
"Theorems in mathematical analysis",
"Theorems in statistics",
"Theorems in measure theory",
"Theorems in functional analysis",
"Theorems in harmonic analysis",
"Mathematical problems",
"Mathematical theorems"
] |
2,369,853 | https://en.wikipedia.org/wiki/Michelson%E2%80%93Gale%E2%80%93Pearson%20experiment | The Michelson–Gale–Pearson experiment (1925) is a modified version of the Michelson–Morley experiment and the Sagnac interferometer. It measured the Sagnac effect due to Earth's rotation, and thus tests special relativity and theories of a luminiferous aether in the rotating frame of the Earth.
Experiment
The aim, as it was first proposed by Albert A. Michelson in 1904 and then executed in 1925 by Michelson and Henry G. Gale, was to find out whether the rotation of the Earth has an effect on the propagation of light in the vicinity of the Earth.
The Michelson–Gale experiment was a very large ring interferometer (with a perimeter of 1.9 kilometers), large enough to detect the angular velocity of the Earth. Like the original Michelson–Morley experiment, the Michelson–Gale–Pearson version compared the light from a single source (a carbon arc) after travelling in two directions. The major change was to replace the two "arms" of the original MM version with two rectangles, one much larger than the other. Light was sent into the rectangles, reflecting off mirrors at the corners, and returned to the starting point. Light exiting the two rectangles was compared on a screen just as the light returning from the two arms would be in a standard MM experiment. The expected fringe shift in accordance with the stationary aether and special relativity was given by Michelson as:
Δ = 4Aω sin φ / (λc), where Δ is the displacement in fringes, A the area of the ring, φ the latitude of the experiment site in Clearing, Illinois (41° 46'), c the speed of light, ω the angular velocity of the Earth, and λ the effective wavelength used. In other words, this experiment aimed to detect the Sagnac effect due to Earth's rotation.
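Plugging representative numbers into this formula reproduces a shift of roughly the predicted size. In the sketch below the ring dimensions and wavelength are assumed values of the right order of magnitude (a rectangle of roughly 612 m × 339 m and an effective wavelength near 570 nm are commonly quoted), not figures taken from this article.

```python
import math

def sagnac_fringe_shift(area_m2, latitude_deg, wavelength_m, omega=7.292e-5, c=2.998e8):
    """Expected fringe displacement: delta = 4 * A * omega * sin(latitude) / (lambda * c)."""
    return 4.0 * area_m2 * omega * math.sin(math.radians(latitude_deg)) / (wavelength_m * c)

# Assumed rectangle of ~612 m x 339 m at latitude 41.77 degrees, effective wavelength ~570 nm.
area = 612.0 * 339.0
print(f"predicted shift ~ {sagnac_fringe_shift(area, 41.77, 570e-9):.3f} of a fringe")  # ~0.236
```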
Result
The outcome of the experiment was that the angular velocity of the Earth as measured by astronomy was confirmed to within measuring accuracy. The ring interferometer of the Michelson–Gale experiment was not calibrated by comparison with an outside reference (which was not possible, because the setup was fixed to the Earth). From its design it could be deduced where the central interference fringe ought to be if there were zero shift. The measured shift was 230 parts in 1000, with an accuracy of 5 parts in 1000. The predicted shift was 237 parts in 1000. According to Michelson and Gale, the experiment is compatible with both the idea of a stationary ether and special relativity.
As Michelson had already pointed out in 1904, a positive result in such experiments contradicts the hypothesis of complete aether drag, since it shows that the spinning surface of the Earth experiences an aether wind. The Michelson–Morley experiment shows, on the contrary, that a hypothetical aether could not be moving relative to the Earth; that is, as the Earth orbits, it would have to drag the aether along. Those two results are not incompatible per se, but in the absence of a model to reconcile them, they are more ad hoc than the explanation of both experiments within special relativity. The experiment is consistent with relativity for the same reason as all other Sagnac-type experiments (see Sagnac effect): rotation is absolute in special relativity, because there is no inertial frame of reference in which the whole device is at rest during the complete process of rotation; thus the light paths of the two rays are different in all of those frames, and consequently a positive result must occur. It is also possible to define rotating frames in special relativity (Born coordinates), yet in those frames the speed of light is no longer constant over extended regions, so in this view, too, a positive result must occur. Today, Sagnac-type effects due to Earth's rotation are routinely incorporated into GPS.
References
Physics experiments
Aether theories
1925 in science | Michelson–Gale–Pearson experiment | [
"Physics"
] | 778 | [
"Experimental physics",
"Physics experiments"
] |
2,370,396 | https://en.wikipedia.org/wiki/Glucagon-like%20peptide-2 | Glucagon-like peptide-2 (GLP-2) is a 33 amino acid peptide with the sequence HADGSFSDEMNTILDNLAARDFINWLIQTKITD (see Proteinogenic amino acid) in humans. GLP-2 is created by specific post-translational proteolytic cleavage of proglucagon in a process that also liberates the related glucagon-like peptide-1 (GLP-1). GLP-2 is produced by the intestinal endocrine L cell and by various neurons in the central nervous system. Intestinal GLP-2 is co-secreted along with GLP-1 upon nutrient ingestion.
When externally administered, GLP-2 produces a number of effects in humans and rodents, including intestinal growth, enhancement of intestinal function, reduction in bone breakdown and neuroprotection. GLP-2 may act in an endocrine fashion to link intestinal growth and metabolism with nutrient intake. GLP-2 and related analogs may be treatments for short bowel syndrome, Crohn's disease, osteoporosis and as adjuvant therapy during cancer chemotherapy.
GLP-2 has an antidepressant effect in a mouse model of depression when delivered via intracerebroventricular injection. However, a GLP-2 derivative (PAS-CPP-GLP-2) was shown to be efficiently delivered to the brain intranasally, with similar efficacy.
See also
Glucagon-like peptide 2 receptor
References
External links
Biomolecules | Glucagon-like peptide-2 | [
"Chemistry",
"Biology"
] | 334 | [
"Natural products",
"Organic compounds",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |