Dataset schema (field: type, observed range):
id: int64, 39 to 79M
url: string, length 32 to 168
text: string, length 7 to 145k
source: string, length 2 to 105
categories: list, length 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, length 0 to 27
1,503,750
https://en.wikipedia.org/wiki/Rolling%20resistance
Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc., is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below), and permanent (plastic) deformation of the object or the surface (e.g. soil). Note that the slippage between the wheel and the surface also results in energy dissipation. Although some researchers have included this term in rolling resistance, some suggest that this dissipation term should be treated separately from rolling resistance because it is due to the applied torque to the wheel and the resultant slip between the wheel and ground, which is called slip loss or slip resistance. In addition, only the so-called slip resistance involves friction, therefore the name "rolling friction" is to an extent a misnomer. Analogous with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction. Any coasting wheeled vehicle will gradually slow down due to rolling resistance including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac/asphalt. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete. Soil rolling resistance factor is not dependent on speed. Primary cause The primary cause of pneumatic tire rolling resistance is hysteresis: A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber. — National Academy of Sciences This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. 
In this case this decreases the pressure that is needed to keep the two bodies separate. The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion. Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. Note that railroads also have hysteresis in the roadbed structure. Definitions In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail). But there is an even broader sense that would include energy wasted by wheel slippage due to the torque applied from the engine. This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly. The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". In the broad sense, "rolling resistance" includes wheel bearing resistance, energy loss by shaking both the roadbed (and the earth underneath) and the vehicle itself, and by sliding of the wheel, road/rail contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They just sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like). Since railroad rolling resistance in the broad sense may be a few times larger than just the pure rolling resistance reported values may be in serious conflict since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance. For tires, rolling resistance is defined as the energy consumed by a tire per unit distance covered. It is also called rolling friction or rolling drag. It is one of the forces that act to oppose the motion of a driver. The main reason for this is that when the tires are in motion and touch the surface, the surface changes shape and causes deformation of the tire. For highway motor vehicles, there is some energy dissipated in shaking the roadway (and the earth beneath it), the shaking of the vehicle itself, and the sliding of the tires. 
But, other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance doesn't seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances. Rolling resistance coefficient The "rolling resistance coefficient" is defined by the following equation: $F = C_{rr} N$, where $F$ is the rolling resistance force (shown in figure 1), $C_{rr}$ is the dimensionless rolling resistance coefficient or coefficient of rolling friction (CRF), and $N$ is the normal force, the force perpendicular to the surface on which the wheel is rolling. $C_{rr}$ is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on a level surface, or zero grade, with zero air resistance) per unit force of weight. It is assumed that all wheels are the same and bear identical weight. Thus: $C_{rr} = 0.01$ means that it would only take 0.01 pounds to tow a vehicle weighing one pound. For a 1000-pound vehicle, it would take 1000 times more tow force, i.e. 10 pounds. One could say that $C_{rr}$ is in lb(tow-force)/lb(vehicle weight). Since this lb/lb is force divided by force, $C_{rr}$ is dimensionless. Multiply it by 100 and you get the percent (%) of the weight of the vehicle required to maintain slow steady speed. $C_{rr}$ is often multiplied by 1000 to get the parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg), which is the same as pounds of resistance per 1000 pounds of load or newtons per kilonewton, etc. For the US railroads, lb/ton has traditionally been used; this is just $2000\,C_{rr}$. Thus, they are all just measures of resistance per unit vehicle weight. While they are all "specific resistances", sometimes they are just called "resistance" although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in earth's gravity a kilogram of mass weighs a kilogram and exerts a kilogram of force), so one could claim that $C_{rr}$ is also the force per unit mass in such units. The SI system would use N/tonne (N/T, N/t), which is $1000\,g\,C_{rr}$ and is force per unit mass, where $g$ is the acceleration of gravity in SI units (meters per second squared). (A short numerical sketch of these conversions follows below.) The above shows resistance proportional to $C_{rr}$ but does not explicitly show any variation with speed, loads, torque, surface roughness, diameter, tire inflation/wear, etc., because $C_{rr}$ itself varies with those factors. It might seem from the above definition of $C_{rr}$ that the rolling resistance is directly proportional to vehicle weight, but it is not. Measurement There are at least two popular models for calculating rolling resistance. "Rolling resistance coefficient (RRC). The value of the rolling resistance force divided by the wheel load. The Society of Automotive Engineers (SAE) has developed test practices to measure the RRC of tires. These tests (SAE J1269 and SAE J2452) are usually performed on new tires. When measured by using these standard test practices, most new passenger tires have reported RRCs ranging from 0.007 to 0.014." In the case of bicycle tires, values of 0.0025 to 0.005 are achieved. These coefficients are measured on rollers, with power meters on road surfaces, or with coast-down tests. In the latter two cases, the effect of air resistance must be subtracted or the tests performed at very low speeds.
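To make the definition and the unit conversions above concrete, here is a minimal numerical sketch. The function names and the example figures are illustrative assumptions, not part of any cited standard:

```python
# Minimal sketch of the rolling resistance coefficient relations described above.
# Names and example numbers are illustrative only.

G = 9.81  # standard gravity, m/s^2

def rolling_resistance_force(c_rr, normal_force):
    """F = C_rr * N; the result is in whatever force unit the normal force uses."""
    return c_rr * normal_force

def crr_as_specific_resistance(c_rr):
    """Express a dimensionless C_rr in the equivalent 'resistance per unit weight' units."""
    return {
        "percent of vehicle weight": 100 * c_rr,
        "kg-force per tonne (= N/kN = lb per 1000 lb)": 1000 * c_rr,
        "lb per US short ton (2000 lb)": 2000 * c_rr,
        "newtons per tonne (force per unit mass)": 1000 * G * c_rr,
    }

# Example from the text: C_rr = 0.01 -> 10 lb of tow force for a 1000 lb vehicle,
# i.e. 1% of vehicle weight, 10 kg-force/tonne, 20 lb/ton, about 98 N/tonne.
print(rolling_resistance_force(0.01, 1000.0))
print(crr_as_specific_resistance(0.01))
```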
The coefficient of rolling resistance b, which has the dimension of length, is approximately (due to the small-angle approximation of $\cos\theta \approx 1$) equal to the value of the rolling resistance force times the radius of the wheel divided by the wheel load. ISO 18164:2005 is used to test rolling resistance in Europe. The results of these tests can be hard for the general public to obtain as manufacturers prefer to publicize "comfort" and "performance". Physical formulae The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be calculated by $C_{rr} = \sqrt{z/d}$, where $z$ is the sinkage depth and $d$ is the diameter of the rigid wheel. The empirical formula for $C_{rr}$ for cast iron mine car wheels on steel rails is $C_{rr} = 0.0048\,(18/D)^{1/2}\,(100/W)^{1/4}$, where $D$ is the wheel diameter in inches and $W$ is the load on the wheel in pounds-force. As an alternative to using $C_{rr}$ one can use $b$, which is a different rolling resistance coefficient or coefficient of rolling friction with dimension of length. It is defined by the following formula: $F = \frac{N b}{r}$, where $F$ is the rolling resistance force (shown in figure 1), $r$ is the wheel radius, $b$ is the rolling resistance coefficient or coefficient of rolling friction with dimension of length, and $N$ is the normal force (equal to W, not R, as shown in figure 1). The above equation, where resistance is inversely proportional to radius $r$, seems to be based on the discredited "Coulomb's law" (neither Coulomb's inverse square law nor Coulomb's law of friction). See dependence on diameter. Equating this equation with the force per the rolling resistance coefficient, $F = C_{rr} N$, and solving for $b$, gives $b = C_{rr} \cdot r$. Therefore, if a source gives the rolling resistance coefficient ($C_{rr}$) as a dimensionless coefficient, it can be converted to $b$, having units of length, by multiplying by wheel radius $r$. Rolling resistance coefficient examples Table of rolling resistance coefficient examples: For example, in earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons for rolling (1000 kg × 9.81 m/s² × 0.01 = 98.1 N). Dependence on diameter Stagecoaches and railroads According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. This rule has been experimentally verified for cast iron wheels (8″–24″ diameter) on steel rail and for 19th century carriage wheels. But there are other tests on carriage wheels that do not agree. Theory of a cylinder rolling on an elastic roadway also gives this same rule. These contradict earlier (1785) tests by Coulomb of rolling wooden cylinders, where Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however. Pneumatic tires For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters). Dependence on applied torque The driving torque $T$ to overcome rolling resistance and maintain steady speed on level ground (with no air resistance) can be calculated by $T = \frac{V_s}{\Omega} F$, where $V_s$ is the linear speed of the body (at the axle), $\Omega$ is its rotational speed, and $F$ is the rolling resistance force. It is noteworthy that $V_s/\Omega$ is usually not equal to the radius of the rolling body as a result of wheel slip. The slip between wheel and ground inevitably occurs whenever a driving or braking torque is applied to the wheel. Consequently, the linear speed of the vehicle differs from the wheel's circumferential speed.
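The relations just given (the rigid-wheel formula, the conversion to the length-dimensioned coefficient b, and the driving torque) can be checked numerically. A small sketch, where the wheel radius, speed and rotational speed are hypothetical values chosen only to show the arithmetic:

```python
import math

# Illustrative sketch of the formulas above; example numbers are assumptions.

def crr_rigid_wheel(sinkage_depth, wheel_diameter):
    """Slow rigid wheel on a perfectly elastic surface: C_rr = sqrt(z / d)."""
    return math.sqrt(sinkage_depth / wheel_diameter)

def b_from_crr(c_rr, wheel_radius):
    """Convert the dimensionless C_rr to the length-dimensioned coefficient b = C_rr * r."""
    return c_rr * wheel_radius

def driving_torque(rolling_force, linear_speed, rotational_speed):
    """Torque to overcome rolling resistance at steady speed: T = (V_s / Omega) * F."""
    return (linear_speed / rotational_speed) * rolling_force

# Worked example from the text: 1000 kg car on asphalt, C_rr of about 0.01.
weight_n = 1000 * 9.81                      # normal force, N
force_n = 0.01 * weight_n                   # ~98.1 N of rolling resistance
print(force_n)
print(b_from_crr(0.01, 0.30))               # b = 0.003 m for a hypothetical 0.30 m wheel radius
print(driving_torque(force_n, 25.0, 80.0))  # torque at 25 m/s with wheels at 80 rad/s (hypothetical)
print(crr_rigid_wheel(0.001, 0.60))         # hypothetical 1 mm sinkage, 0.60 m diameter wheel
```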
It is notable that slip does not occur in driven wheels, which are not subjected to driving torque, under any conditions except braking. Therefore, rolling resistance, namely hysteresis loss, is the main source of energy dissipation in driven wheels or axles, whereas in the drive wheels and axles slip resistance, namely loss due to wheel slip, plays a role as well as rolling resistance. The significance of rolling or slip resistance is largely dependent on the tractive force, coefficient of friction, normal load, etc. All wheels "Applied torque" may either be driving torque applied by a motor (often through a transmission) or a braking torque applied by brakes (including regenerative braking). Such torques result in energy dissipation above that due to the basic rolling resistance of a freely rolling wheel (i.e., the basic rolling resistance excludes slip resistance). This additional loss is in part due to the fact that there is some slipping of the wheel, and for pneumatic tires, there is more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%. A small percentage slip can result in a slip resistance which is much larger than the basic rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force and thus much more power per unit velocity is being applied (recall power = force × velocity, so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance. For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels. It is shown that for a passenger car, when the tractive force is about 40% of the maximum traction, the slip resistance is almost equal to the basic rolling resistance (hysteresis loss). But in the case of a tractive force equal to 70% of the maximum traction, slip resistance becomes 10 times larger than the basic rolling resistance. Railroad steel wheels In order to apply any traction to the wheels, some slippage of the wheel is required. For trains climbing up a grade, this slip is normally 1.5% to 2.5%. Slip (also known as creep) is normally roughly directly proportional to tractive effort. An exception is when the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent, as discussed above); then slip rapidly increases with tractive effort and is no longer linear. With slightly higher applied tractive effort the wheel spins out of control and the adhesion drops, resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye—the slip of, say, 2% for traction is only observed by instruments. Such rapid slip may result in excessive wear or damage. Pneumatic tires Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). This is in part due to a slip of about 5%. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher.
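A short worked example of the slip loss described above. The specific vehicle weight and tractive-force ratio are illustrative assumptions, chosen to be consistent with the 5% slip and roughly 200% figures quoted in the text:

```python
# Slip (creep) loss sketch: power lost to slip = tractive force * (wheel speed - vehicle speed).
# Per unit of vehicle speed, that extra loss acts like an added resistance force:
#   slip_resistance ~= slip_fraction * tractive_force
# Numbers below are illustrative assumptions.

c_rr = 0.01                      # basic rolling resistance coefficient (pneumatic tire, from the text)
weight = 15000.0                 # vehicle weight, N (hypothetical)
basic_rolling = c_rr * weight    # 150 N

slip = 0.05                      # 5% slip
tractive_force = 0.4 * weight    # high torque: tangential force of roughly 40% of vehicle weight

slip_resistance = slip * tractive_force
increase = slip_resistance / basic_rolling
print(basic_rolling, slip_resistance, increase)
# -> 150 N basic, 300 N from slip: about a 200% increase, in line with the figures above.
```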
Dependence on wheel load Railroad steel wheels The rolling resistance coefficient, Crr, significantly decreases as the weight of the rail car per wheel increases. For example, an empty freight car had about twice the Crr as a loaded car (Crr=0.002 vs. Crr=0.001). This same "economy of scale" shows up in testing of mine rail cars. The theoretical Crr for a rigid wheel rolling on an elastic roadbed shows Crr inversely proportional to the square root of the load. If Crr is itself dependent on wheel load per an inverse square-root rule, then for an increase in load of 2% only a 1% increase in rolling resistance occurs. Pneumatic tires For pneumatic tires, the direction of change in Crr (rolling resistance coefficient) depends on whether or not tire inflation is increased with increasing load. It is reported that, if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%. But, if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. Of course, this will increase the rolling resistance by 20% due to the increase in load plus 1.2 x 4% due to the increase in Crr resulting in a 24.8% increase in rolling resistance. Dependence on curvature of roadway General When a vehicle (motor vehicle or railroad train) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly counter the centrifugal force with an equal and opposing centripetal force due to the banking, then there will be a net unbalanced sideways force on the vehicle. This will result in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with rail cant of a rail). For railroads, this is called curve resistance but for roads it has (at least once) been called rolling resistance due to cornering. Sound Rolling friction generates sound (vibrational) energy, as mechanical energy is converted to this form of energy due to the friction. One of the most common examples of rolling friction is the movement of motor vehicle tires on a roadway, a process which generates sound as a by-product. The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads, and compression (and subsequent decompression) of air temporarily captured within the treads. Factors that contribute in tires Several factors affect the magnitude of rolling resistance a tire generates: As mentioned in the introduction: wheel radius, forward speed, surface adhesion, and relative micro-sliding. Material - different fillers and polymers in tire composition can improve traction while reducing hysteresis. The replacement of some carbon black with higher-priced silica–silane is one common way of reducing rolling resistance. The use of exotic materials including nano-clay has been shown to reduce rolling resistance in high performance rubber tires. Solvents may also be used to swell solid tires, decreasing the rolling resistance. Dimensions - rolling resistance in tires is related to the flex of sidewalls and the contact area of the tire For example, at the same pressure, wider bicycle tires flex less in the sidewalls as they roll and thus have lower rolling resistance (although higher air resistance). Extent of inflation - Lower pressure in tires results in more flexing of the sidewalls and higher rolling resistance. 
This energy conversion in the sidewalls increases resistance and can also lead to overheating and may have played a part in the infamous Ford Explorer rollover accidents. Overinflating tires (such as bicycle tires) may not lower the overall rolling resistance, as the tire may skip and hop over the road surface. Traction is sacrificed, and overall rolling friction may not be reduced as the wheel rotational speed changes and slippage increases. Sidewall deflection is not a direct measurement of rolling friction. A high quality tire with a high quality (and supple) casing will allow for more flex per energy loss than a cheap tire with a stiff sidewall. Again, on a bicycle, a quality tire with a supple casing will still roll more easily than a cheap tire with a stiff casing. Similarly, as noted by Goodyear for truck tires, a tire with a "fuel saving" casing will benefit the fuel economy through many tread lives (i.e. retreading), while a tire with a "fuel saving" tread design will only benefit until the tread wears down. In tires, tread thickness and shape have much to do with rolling resistance. The thicker and more contoured the tread, the higher the rolling resistance. Thus, the "fastest" bicycle tires have very little tread, and heavy duty trucks get the best fuel economy as the tire tread wears out. Diameter effects seem to be negligible, provided the pavement is hard and the range of diameters is limited. See dependence on diameter. Virtually all world speed records have been set on relatively narrow wheels, probably because of their aerodynamic advantage at high speed, which is much less important at normal speeds. Temperature: with both solid and pneumatic tires, rolling resistance has been found to decrease as temperature increases (within a range of temperatures: i.e. there is an upper limit to this effect). For a rise in temperature from 30 °C to 70 °C the rolling resistance decreased by 20–25%. Racers heat their tires before racing, but this is primarily done to increase tire friction rather than to decrease rolling resistance. Railroads: Components of rolling resistance In a broad sense rolling resistance can be defined as the sum of the following components: Wheel bearing torque losses. Pure rolling resistance. Sliding of the wheel on the rail. Loss of energy to the roadbed (and earth). Loss of energy to oscillation of railway rolling stock. Wheel bearing torque losses can be measured as a rolling resistance at the wheel rim, Crr. Railroads normally use roller bearings which are either cylindrical (Russia) or tapered (United States). The specific rolling resistance in bearings varies with both wheel loading and speed. Wheel bearing rolling resistance is lowest with high axle loads and intermediate speeds of 60–80 km/h, with a Crr of 0.00013 (axle load of 21 tonnes). For empty freight cars with axle loads of 5.5 tonnes, Crr goes up to 0.00020 at 60 km/h, but at a low speed of 20 km/h it increases to 0.00024 and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels. Comparing rolling resistance of highway vehicles and trains The rolling resistance of the steel wheels on steel rail of a train is far less than that of the rubber-tired wheels of an automobile or truck. The weight of trains varies greatly; in some cases they may be much heavier per passenger or per net ton of freight than an automobile or truck, but in other cases they may be much lighter.
As an example of a very heavy passenger train, in 1975, Amtrak passenger trains weighed a little over 7 tonnes per passenger, which is much heavier than an average of a little over one ton per passenger for an automobile. This means that for an Amtrak passenger train in 1975, much of the energy savings of the lower rolling resistance was lost to its greater weight. An example of a very light high-speed passenger train is the N700 Series Shinkansen, which weighs 715 tonnes and carries 1323 passengers, resulting in a per-passenger weight of about half a tonne. This lighter weight per passenger, combined with the lower rolling resistance of steel wheels on steel rail means that an N700 Shinkansen is much more energy efficient than a typical automobile. In the case of freight, CSX ran an advertisement campaign in 2013 claiming that their freight trains move "a ton of freight 436 miles on a gallon of fuel", whereas some sources claim trucks move a ton of freight about 130 miles per gallon of fuel, indicating trains are more efficient overall. See also Coefficient of friction Low-rolling resistance tires Maglev (Magnetic Levitation, the elimination of rolling and thus rolling resistance) Rolling element bearing References Астахов П.Н. "Сопротивление движению железнодорожного подвижного состава" (Resistance to motion of railway rolling stock) Труды ЦНИИ МПС (ISSN 0372-3305). Выпуск 311 (Vol. 311). - Москва: Транспорт, 1966. – 178 pp. perm. record at UC Berkeley (In 2012, full text was on the Internet but the U.S. was blocked) Деев В.В., Ильин Г.А., Афонин Г.С. "Тяга поездов" (Traction of trains) Учебное пособие. - М.: Транспорт, 1987. - 264 pp. Hay, William W. "Railroad Engineering" New York, Wiley 1953 Hersey, Mayo D., "Rolling Friction" Transactions of the ASME, April 1969 pp. 260–275 and Journal of Lubrication Technology, January 1970, pp. 83–88 (one article split between two journals) Except for the "Historical Introduction" and a survey of the literature, it is mainly about laboratory testing of mine railroad cast iron wheels of diameters 8″ to 24 done in the 1920s (almost a half century delay between experiment and publication). Hoerner, Sighard F., "Fluid dynamic drag", published by the author, 1965. (Chapt. 12 is "Land-Borne Vehicles" and includes rolling resistance (trains, autos, trucks).) Roberts, G. B., "Power wastage in tires", International Rubber Conference, Washington, D.C. 1959. U.S National Bureau of Standards, "Mechanics of Pneumatic Tires", Monograph #132, 1969–1970. Williams, J. A. ''Engineering tribology'. Oxford University Press, 1994. External links Rolling Resistance and Fuel Saving temperature vs rolling resistance Simple roll-down test to measure Crr in cars and bikes Rolling Resistance Thresholds Classical mechanics Energy economics Energy in transport Transport economics Vehicle dynamics ko:마찰력#구름 마찰력 it:Attrito#Attrito volvente
Rolling resistance
[ "Physics", "Environmental_science" ]
5,852
[ "Energy economics", "Classical mechanics", "Physical systems", "Transport", "Mechanics", "Energy in transport", "Environmental social science" ]
1,503,867
https://en.wikipedia.org/wiki/DAPI
DAPI (pronounced 'DAPPY', /ˈdæpiː/), or 4′,6-diamidino-2-phenylindole, is a fluorescent stain that binds strongly to adenine–thymine-rich regions in DNA. It is used extensively in fluorescence microscopy. As DAPI can pass through an intact cell membrane, it can be used to stain both live and fixed cells, though it passes through the membrane less efficiently in live cells and therefore provides a marker for membrane viability. History DAPI was first synthesised in 1971 in the laboratory of Otto Dann as part of a search for drugs to treat trypanosomiasis. Although it was unsuccessful as a drug, further investigation indicated it bound strongly to DNA and became more fluorescent when bound. This led to its use in identifying mitochondrial DNA in ultracentrifugation in 1975, the first recorded use of DAPI as a fluorescent DNA stain. Strong fluorescence when bound to DNA led to the rapid adoption of DAPI for fluorescent staining of DNA for fluorescence microscopy. Its use for detecting DNA in plant, metazoa and bacteria cells and virus particles was demonstrated in the late 1970s, and quantitative staining of DNA inside cells was demonstrated in 1977. Use of DAPI as a DNA stain for flow cytometry was also demonstrated around this time. Fluorescence properties When bound to double-stranded DNA, DAPI has an absorption maximum at a wavelength of 358 nm (ultraviolet) and its emission maximum is at 461 nm (blue). Therefore, for fluorescence microscopy, DAPI is excited with ultraviolet light and is detected through a blue/cyan filter. The emission peak is fairly broad. DAPI will also bind to RNA, though it is not as strongly fluorescent. Its emission shifts to around 500 nm when bound to RNA. DAPI's blue emission is convenient for microscopists who wish to use multiple fluorescent stains in a single sample. There is some fluorescence overlap between DAPI and green-fluorescent molecules like fluorescein and green fluorescent protein (GFP) but the effect of this is small. Use of spectral unmixing can account for this effect if extremely precise image analysis is required. Outside of analytical fluorescence light microscopy DAPI is also popular for labeling of cell cultures to detect the DNA of contaminating Mycoplasma or virus. The labelled Mycoplasma or virus particles in the growth medium fluoresce once stained by DAPI making them easy to detect. Modelling of absorption and fluorescence properties This DNA fluorescent probe has been effectively modeled using the time-dependent density functional theory, coupled with the IEF version of the polarizable continuum model. This quantum-mechanical modeling has rationalized the absorption and fluorescence behavior given by minor groove binding and intercalation in the DNA pocket, in term of a reduced structural flexibility and polarization. Live cells and toxicity DAPI can be used for fixed cell staining. The concentration of DAPI needed for live cell staining is generally very high; it is rarely used for live cells. It is labeled non-toxic in its MSDS and though it was not shown to have mutagenicity to E. coli, it is labelled as a known mutagen in manufacturer information. As it is a small DNA binding compound, it is likely to have some carcinogenic effects and care should be taken in its handling and disposal. Alternatives The Hoechst stains are similar to DAPI in that they are also blue-fluorescent DNA stains which are compatible with both live- and fixed-cell applications, as well as visible using the same equipment filter settings as for DAPI. 
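The spectral unmixing mentioned above, used to separate DAPI signal from overlapping green emitters such as fluorescein or GFP, amounts to solving a small linear system per pixel. A minimal sketch; the bleed-through coefficients and pixel readings below are made-up values purely for illustration, not calibrated data:

```python
import numpy as np

# Measured intensity in each detection channel is modeled as a linear mix of the true
# dye signals: measured = M @ true, where M holds the (assumed) bleed-through fractions.
M = np.array([
    [1.00, 0.05],   # blue channel: mostly DAPI, a little GFP
    [0.15, 1.00],   # green channel: mostly GFP, some DAPI tail emission
])

measured = np.array([0.80, 0.40])         # hypothetical per-pixel readings (blue, green)
dapi, gfp = np.linalg.solve(M, measured)  # recover the unmixed contributions
print(dapi, gfp)
```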
References See also DNA binding ligand Hoechst stain Lexitropsin Netropsin Pentamidine Staining dyes Fluorescent dyes DNA-binding substances Indoles Amidines
DAPI
[ "Chemistry", "Biology" ]
791
[ "Genetics techniques", "Amidines", "Functional groups", "DNA-binding substances", "Bases (chemistry)" ]
1,504,065
https://en.wikipedia.org/wiki/Biological%20target
A biological target is anything within a living organism to which some other entity (like an endogenous ligand or a drug) is directed and/or binds, resulting in a change in its behavior or function. Examples of common classes of biological targets are proteins and nucleic acids. The definition is context-dependent, and can refer to the biological target of a pharmacologically active drug compound, the receptor target of a hormone (like insulin), or some other target of an external stimulus. Biological targets are most commonly proteins such as enzymes, ion channels, and receptors. Mechanism The external stimulus (i.e., the drug or ligand) physically binds to ("hits") the biological target. The interaction between the substance and the target may be: noncovalent – A relatively weak interaction between the stimulus and the target where no chemical bond is formed between the two interacting partners and hence the interaction is completely reversible. reversible covalent – A chemical reaction occurs between the stimulus and target in which the stimulus becomes chemically bonded to the target, but the reverse reaction also readily occurs in which the bond can be broken. irreversible covalent – The stimulus is permanently bound to the target through irreversible chemical bond formation. Depending on the nature of the stimulus, the following can occur: There is no direct change in the biological target, but the binding of the substance prevents other endogenous substances (such as activating hormones) from binding to the target. Depending on the nature of the target, this effect is referred as receptor antagonism, enzyme inhibition, or ion channel blockade. A conformational change in the target is induced by the stimulus which results in a change in target function. This change in function can mimic the effect of the endogenous substance in which case the effect is referred to as receptor agonism (or channel or enzyme activation) or be the opposite of the endogenous substance which in the case of receptors is referred to as inverse agonism. Drug targets The term "biological target" is frequently used in pharmaceutical research to describe the native protein in the body whose activity is modified by a drug resulting in a specific effect, which may be a desirable therapeutic effect or an unwanted adverse effect. In this context, the biological target is often referred to as a drug target. The most common drug targets of currently marketed drugs include: proteins G protein-coupled receptors (target of 50% of drugs) enzymes (especially protein kinases, proteases, esterases, and phosphatases) ion channels ligand-gated ion channels voltage-gated ion channels nuclear hormone receptors structural proteins such as tubulin membrane transport proteins nucleic acids Drug target identification Identifying the biological origin of a disease, and the potential targets for intervention, is the first step in the discovery of a medicine using the reverse pharmacology approach. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. An alternative means of identifying new drug targets is forward pharmacology based on phenotypic screening to identify "orphan" ligands whose targets are subsequently identified through target deconvolution. 
Databases Databases containing biological target information: Therapeutic Targets Database (TTD) DrugMap DrugBank Binding DB Conservation ecology These biological targets are conserved across species, making pharmaceutical pollution of the environment a danger to species that possess the same targets. For example, the synthetic estrogen in human contraceptives, 17α-ethinylestradiol, has been shown to increase the feminization of fish downstream from sewage treatment plants, thereby unbalancing reproduction and creating an additional selective pressure on fish survival. Pharmaceuticals are usually found at ng/L to low-μg/L concentrations in the aquatic environment. Adverse effects may occur in non-target species as a consequence of specific drug–target interactions. Therefore, evolutionarily well-conserved drug targets are likely to be associated with an increased risk for non-targeted pharmacological effects. See also Drug discovery Environmental impact of pharmaceuticals and personal care products References Pharmacology Biology terminology
Biological target
[ "Chemistry", "Biology" ]
835
[ "Pharmacology", "nan", "Medicinal chemistry" ]
1,504,755
https://en.wikipedia.org/wiki/Electronic%20flight%20instrument%20system
In aviation, an electronic flight instrument system (EFIS) is a flight instrument display system in an aircraft cockpit that displays flight data electronically rather than electromechanically. An EFIS normally consists of a primary flight display (PFD), multi-function display (MFD), and an engine indicating and crew alerting system (EICAS) display. Early EFIS models used cathode-ray tube (CRT) displays, but liquid crystal displays (LCD) are now more common. The complex electromechanical attitude director indicator (ADI) and horizontal situation indicator (HSI) were the first candidates for replacement by EFIS. Now, however, few flight deck instruments cannot be replaced by an electronic display. Display units Primary flight display (PFD) On the flight deck, the display units are the most obvious parts of an EFIS system, and are the features that lead to the term glass cockpit. The display unit that replaces the artificial horizon is called the primary flight display (PFD). If a separate display replaces the HSI, it is called the navigation display. The PFD displays all information critical to flight, including calibrated airspeed, altitude, heading, attitude, vertical speed and yaw. The PFD is designed to improve a pilot's situational awareness by integrating this information into a single display instead of six different analog instruments, reducing the amount of time necessary to monitor the instruments. PFDs also increase situational awareness by alerting the aircrew to unusual or potentially hazardous conditions — for example, low airspeed, high rate of descent — by changing the color or shape of the display or by providing audio alerts. The names Electronic Attitude Director Indicator and Electronic Horizontal Situation Indicator are used by some manufacturers. However, a simulated ADI is only the centerpiece of the PFD. Additional information is both superimposed on and arranged around this graphic. Multi-function displays can render a separate navigation display unnecessary. Another option is to use one large screen to show both the PFD and navigation display. The PFD and navigation display (and multi-function display, where fitted) are often physically identical. The information displayed is determined by the system interfaces where the display units are fitted. Thus, spares holding is simplified: the one display unit can be fitted in any position. LCD units generate less heat than CRTs; an advantage in a congested instrument panel. They are also lighter, and occupy a lower volume. Multi-function display (MFD) The MFD (multi-function display) displays navigational and weather information from multiple systems. MFDs are most frequently designed as "chart-centric", where the aircrew can overlay different information over a map or chart. Examples of MFD overlay information include the aircraft's current route plan, weather information from either on-board radar or lightning detection sensors or ground-based sensors, e.g., NEXRAD, restricted airspace and aircraft traffic. The MFD can also be used to view other non-overlay type of data (e.g., current route plan) and calculated overlay-type data, e.g., the glide radius of the aircraft, given current location over terrain, winds, and aircraft speed and altitude. MFDs can also display information about aircraft systems, such as fuel and electrical systems (see EICAS, below). As with the PFD, the MFD can change the color or shape of the data to alert the aircrew to hazardous situations. 
Engine indications and crew alerting system (EICAS) / electronic centralized aircraft monitoring (ECAM) EICAS (Engine Indications and Crew Alerting System) displays information about the aircraft's systems, including its fuel, electrical and propulsion systems (engines). EICAS displays are often designed to mimic traditional round gauges while also supplying digital readouts of the parameters. EICAS improves situational awareness by allowing the aircrew to view complex information in a graphical format and also by alerting the crew to unusual or hazardous situations. For example, if an engine begins to lose oil pressure, the EICAS might sound an alert, switch the display to the page with the oil system information and outline the low oil pressure data with a red box. Unlike traditional round gauges, many levels of warnings and alarms can be set. Proper care must be taken when designing EICAS to ensure that the aircrew are always provided with the most important information and not overloaded with warnings or alarms. ECAM is a similar system used by Airbus, which in addition to providing EICAS functions also recommend remedial action. Control panels EFIS provides pilots with controls that select display range and mode (for example, map or compass rose) and enter data (such as selected heading). Where other equipment uses pilot inputs, data buses broadcast the pilot's selections so that the pilot need only enter the selection once. For example, the pilot selects the desired level-off altitude on a control unit. The EFIS repeats this selected altitude on the PFD, and by comparing it with the actual altitude (from the air data computer) generates an altitude error display. This same altitude selection is used by the automatic flight control system to level off, and by the altitude alerting system to provide appropriate warnings. Data processors The EFIS visual display is produced by the symbol generator. This receives data inputs from the pilot, signals from sensors, and EFIS format selections made by the pilot. The symbol generator can go by other names, such as display processing computer, display electronics unit, etc. The symbol generator does more than generate symbols. It has (at the least) monitoring facilities, a graphics generator and a display driver. Inputs from sensors and controls arrive via data buses, and are checked for validity. The required computations are performed, and the graphics generator and display driver produce the inputs to the display units. Capabilities Like personal computers, flight instrument systems need power-on-self-test facilities and continuous self-monitoring. Flight instrument systems, however, need additional monitoring capabilities: Input validation — verify that each sensor is providing valid data Data comparison — cross check inputs from duplicated sensors Display monitoring — detect failures within the instrument system Former practice Traditional (electromechanical) displays are equipped with synchro mechanisms that transmit the pitch, roll, and heading shown on the captain and first officer's instruments to an instrument comparator. The comparator warns of excessive differences between the captain and first officer displays. Even a fault as far downstream as a jam in, say, the roll mechanism of an ADI triggers a comparator warning. The instrument comparator thus provides both comparator monitoring and display monitoring. 
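As a concrete illustration of the data-sharing described above, where the pilot's single altitude selection is reused for the PFD error display, the autoflight level-off and the altitude alerter, here is a minimal sketch. The names, threshold and alert text are hypothetical, not taken from any avionics standard:

```python
# Hypothetical sketch of how one pilot-selected altitude feeds several EFIS consumers.

ALERT_BAND_FT = 300.0  # illustrative alerting band around the selected altitude

def altitude_error(selected_ft, actual_ft):
    """Error shown on the PFD: positive means the aircraft is below the selected altitude."""
    return selected_ft - actual_ft

def altitude_alert(selected_ft, actual_ft):
    """Very simplified alerting rule: flag when the aircraft nears the selected altitude."""
    if abs(altitude_error(selected_ft, actual_ft)) <= ALERT_BAND_FT:
        return "ALTITUDE"
    return None

# The same selected value would also be broadcast over the data bus to the autoflight
# system for level-off; that consumer is not modeled here.
print(altitude_error(10000.0, 9650.0))  # 350.0 ft still to climb
print(altitude_alert(10000.0, 9750.0))  # within the band -> "ALTITUDE"
```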
Comparator monitoring With EFIS, the comparator function is simple: Is roll data (bank angle) from sensor 1 the same as roll data from sensor 2? If not, display a warning caption (such as CHECK ROLL) on both PFDs. Comparison monitors give warnings for airspeed, pitch, roll, and altitude indications. More advanced EFIS systems have more comparator monitors. Display monitoring In this technique, each symbol generator contains two display monitoring channels. One channel, the internal, samples the output from its own symbol generator to the display unit and computes, for example, what roll attitude should produce that indication. This computed roll attitude is then compared with the roll attitude input to the symbol generator from the INS or AHRS. Any difference has probably been introduced by faulty processing, and triggers a warning on the relevant display. The external monitoring channel carries out the same check on the symbol generator on the other side of the flight deck: the Captain's symbol generator checks the First Officer's, the First Officer's checks the Captain's. Whichever symbol generator detects a fault, puts up a warning on its own display. The external monitoring channel also checks sensor inputs (to the symbol generator) for reasonableness. A spurious input, such as a radio height greater than the radio altimeter's maximum, results in a warning. Human factors Clutter At various stages of a flight, a pilot needs different combinations of data. Ideally, the avionics only show the data in use—but an electromechanical instrument must be in view all the time. To improve display clarity, ADIs and HSIs use intricate mechanisms to remove superfluous indications temporarily—e.g., removing the glide slope scale when the pilot doesn't need it. Under normal conditions, an EFIS might not display some indications, e.g., engine vibration. Only when some parameter exceeds its limits does the system display the reading. In similar fashion, EFIS is programmed to show the glideslope scale and pointer only during an ILS approach. In the case of an input failure, an electromechanical instrument adds yet another indicator—typically, a bar drops across the erroneous data. EFIS, on the other hand, removes invalid data from the display and substitutes an appropriate warning. A de-clutter mode activates automatically when circumstances require the pilot's attention for a specific item. For example, if the aircraft pitches up or down beyond a specified limit—usually 30 to 60 degrees—the attitude indicator de-clutters other items from sight until the pilot brings the pitch to an acceptable level. This helps the pilot focus on the most important tasks. Color Traditional instruments have long used color, but lack the ability to change a color to indicate some change in condition. The electronic display technology of EFIS has no such restriction and uses color widely. For example, as an aircraft approaches the glide slope, a blue caption can indicate glide slope is armed, and capture might change the color to green. Typical EFIS systems color code the navigation needles to reflect the type of navigation. Green needles indicate ground-based navigation, such as VORs, Localizers and ILS systems. Magenta needles indicate GPS navigation. Advantages EFIS provides versatility by avoiding some physical limitations of traditional instruments. A pilot can switch the same display that shows a course deviation indicator to show the planned track provided by an area navigation or flight management system. 
Pilots can choose to superimpose the weather radar picture on the displayed route. The flexibility afforded by software modifications minimises the costs of responding to new aircraft regulations and equipment. Software updates can update an EFIS system to extend its capabilities. Updates introduced in the 1990s included the ground proximity warning system and traffic collision avoidance system. A degree of redundancy is available even with the simple two-screen EFIS installation. Should the PFD fail, transfer switching repositions its vital information to the screen normally occupied by the navigation display. Advances in EFIS In the late 1980s, EFIS became standard equipment on most Boeing and Airbus airliners, and many business aircraft adopted EFIS in the 1990s. Recent advances in computing power and reductions in the cost of liquid-crystal displays and navigational sensors (such as GPS and attitude and heading reference system) have brought EFIS to general aviation aircraft. Notable examples are the Garmin G1000 and Chelton Flight Systems EFIS-SV. Several EFIS manufacturers have focused on the experimental aircraft market, producing EFIS and EICAS systems for as little as US$1,000-2000. The low cost is possible because of steep drops in the price of sensors and displays, and equipment for experimental aircraft doesn't require expensive Federal Aviation Administration certification. This latter point restricts their use to experimental aircraft and certain other aircraft categories, depending on local regulations. Uncertified EFIS systems are also found in Light-sport aircraft, including factory built, microlight, and ultralight aircraft. These systems can be fitted to certified aircraft in some cases as secondary or backup systems depending on local aviation rules. See also Index of aviation articles Acronyms and abbreviations in avionics Notes Further reading Advisory Circular AC25-11A Electronic Flight Deck Displays, at the U.S. Federal Aviation Administration Electronic Aircraft Instruments Air Data Computer and Displays Avionics Aircraft instruments Applications of control engineering Display technology Glass cockpit Navigational flight instruments ja:グラスコックピット#電子飛行計器システム
Electronic flight instrument system
[ "Technology", "Engineering" ]
2,558
[ "Avionics", "Measuring instruments", "Glass cockpit", "Electronic engineering", "Control engineering", "Aircraft instruments", "Display technology", "Applications of control engineering", "Navigational flight instruments" ]
1,504,792
https://en.wikipedia.org/wiki/Nascent%20hydrogen
Nascent hydrogen is an outdated concept in organic chemistry that was once invoked to explain dissolving-metal reactions, such as the Clemmensen reduction and the Bouveault–Blanc reduction. Since organic compounds do not react with H2, a special state of hydrogen was postulated. It is now understood that dissolving-metal reactions occur at the metal surface, and the concept of nascent hydrogen has been discredited in organic chemistry. However, the formation of atomic hydrogen is largely invoked in inorganic chemistry and corrosion sciences to explain hydrogen embrittlement in metals exposed to electrolysis and anaerobic corrosion (e.g., dissolution of zinc in strong acids (HCl) and aluminium in strong bases (NaOH)). The mechanism of hydrogen embrittlement was first proposed by Johnson (1875). The inability of hydrogen atoms to react with organic reagents in organic solvents does not exclude the transient formation of hydrogen atoms capable of immediately diffusing into the crystal lattice of common metals (steel, titanium), different from those of the platinoid group (Pt, Pd, Rh, Ru, Ni), which are well known to dissociate molecular dihydrogen (H2) into atomic hydrogen. History The idea of hydrogen in the nascent state having chemical properties different from those of molecular hydrogen developed in the mid-19th century. Alexander Williamson repeatedly refers to nascent hydrogen in his textbook Chemistry for Students, for example writing of the substitution reaction of carbon tetrachloride with hydrogen to form products such as chloroform and dichloromethane that the "hydrogen must for this purpose be in the nascent state, as free hydrogen does not produce the effect". Williamson also describes the use of nascent hydrogen in the earlier work of Marcellin Berthelot. Franchot published a paper on the concept in 1896, which drew a strongly worded response from Tommasi, who pointed to his own work that concluded "nascent hydrogen is nothing else than H + x calories". The term "nascent hydrogen" continued to be invoked into the 20th century. Reducing agents at low and high pH Devarda's alloy (an alloy of aluminium (~45%), copper (~50%) and zinc (~5%)) is a reducing agent that was commonly used in wet analytical chemistry to produce in situ so-called nascent hydrogen under alkaline conditions for the determination of nitrates (NO3−) after their reduction into ammonia (NH3). In the Marsh test, used for arsenic determination (from the reduction of arsenate (AsO43−) and arsenite (AsO33−) into arsine (AsH3)), hydrogen is generated by contacting zinc powder with hydrochloric acid. So, hydrogen can be conveniently produced at low or high pH, according to the volatility of the species to be detected. Acid conditions in the Marsh test promote the fast escape of the arsine gas (AsH3), while under hyperalkaline solution, the degassing of the reduced ammonia (NH3) is greatly facilitated (the ammonium ion being soluble in aqueous solution under acidic conditions). See also Atomic hydrogen welding References Further reading Hydrogen Electrolysis Hydrogen Obsolete theories in chemistry
Nascent hydrogen
[ "Physics", "Chemistry" ]
666
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Electrochemistry", "Materials", "Electrolysis", "Matter" ]
1,505,128
https://en.wikipedia.org/wiki/Harvard%E2%80%93Smithsonian%20Center%20for%20Astrophysics
The Center for Astrophysics | Harvard & Smithsonian (CfA), previously known as the Harvard–Smithsonian Center for Astrophysics, is an astrophysics research institute jointly operated by the Harvard College Observatory and Smithsonian Astrophysical Observatory. Founded in 1973 and headquartered in Cambridge, Massachusetts, United States, the CfA leads a broad program of research in astronomy, astrophysics, Earth and space sciences, as well as science education. The CfA either leads or participates in the development and operations of more than fifteen ground- and space-based astronomical research observatories across the electromagnetic spectrum, including the forthcoming Giant Magellan Telescope (GMT) and the Chandra X-ray Observatory, one of NASA's Great Observatories. Hosting more than 850 scientists, engineers, and support staff, the CfA is among the largest astronomical research institutes in the world. Its projects have included Nobel Prize-winning advances in cosmology and high energy astrophysics, the discovery of many exoplanets, and the first image of a black hole. The CfA also serves a major role in the global astrophysics research community: the CfA's Astrophysics Data System (ADS), for example, has been universally adopted as the world's online database of astronomy and physics papers. Known for most of its history as the "Harvard-Smithsonian Center for Astrophysics", the CfA rebranded in 2018 to its current name in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. The CfA's current director (since 2022) is Lisa Kewley, who succeeds Charles R. Alcock (Director from 2004 to 2022), Irwin I. Shapiro (Director from 1982 to 2004) and George B. Field (Director from 1973 to 1982). History of the CfA The Center for Astrophysics | Harvard & Smithsonian is not formally an independent legal organization, but rather an institutional entity operated under a memorandum of understanding between Harvard University and the Smithsonian Institution. This collaboration was formalized on July 1, 1973, with the goal of coordinating the related research activities of the Harvard College Observatory (HCO) and the Smithsonian Astrophysical Observatory (SAO) under the leadership of a single director, and housed within the same complex of buildings on the Harvard campus in Cambridge, Massachusetts. The CfA's history is therefore also that of the two fully independent organizations that comprise it. With a combined history of more than 300 years, HCO and SAO have been host to major milestones in astronomical history that predate the CfA's founding. These are briefly summarized below. History of the Smithsonian Astrophysical Observatory (SAO) Samuel Pierpont Langley, the third Secretary of the Smithsonian, founded the Smithsonian Astrophysical Observatory on the south yard of the Smithsonian Castle (on the U.S. National Mall) on March 1, 1890. The Astrophysical Observatory's initial, primary purpose was to "record the amount and character of the Sun's heat". Charles Greeley Abbot was named SAO's first director, and the observatory operated solar telescopes to take daily measurements of the Sun's intensity in different regions of the optical electromagnetic spectrum. In doing so, the observatory enabled Abbot to make critical refinements to the Solar constant, as well as to serendipitously discover Solar variability. 
It is likely that SAO's early history as a solar observatory was part of the inspiration behind the Smithsonian's "sunburst" logo, designed in 1965 by Crimilda Pontes. In 1955, the scientific headquarters of SAO moved from Washington, D.C. to Cambridge, Massachusetts, to affiliate with the Harvard College Observatory (HCO). Fred Lawrence Whipple, then the chairman of the Harvard Astronomy Department, was named the new director of SAO. The collaborative relationship between SAO and HCO therefore predates the official creation of the CfA by 18 years. SAO's move to Harvard's campus also resulted in a rapid expansion of its research program. Following the launch of Sputnik (the world's first human-made satellite) in 1957, SAO accepted a national challenge to create a worldwide satellite-tracking network, collaborating with the United States Air Force on Project Space Track. With the creation of NASA the following year and throughout the Space Race, SAO led major efforts in the development of orbiting observatories and large ground-based telescopes, laboratory and theoretical astrophysics, as well as the application of computers to astrophysical problems. History of Harvard College Observatory (HCO) Partly in response to renewed public interest in astronomy following the 1835 return of Halley's Comet, the Harvard College Observatory was founded in 1839, when the Harvard Corporation appointed William Cranch Bond as an "Astronomical Observer to the University". For its first four years of operation, the observatory was situated at the Dana-Palmer House (where Bond also resided) near Harvard Yard, and consisted of little more than three small telescopes and an astronomical clock. In his 1840 book recounting the history of the college, then Harvard President Josiah Quincy III noted that "there is wanted a reflecting telescope equatorially mounted". This telescope, the 15-inch "Great Refractor", opened seven years later (in 1847) at the top of Observatory Hill in Cambridge (where it still exists today, housed in the oldest of the CfA's complex of buildings). The telescope was the largest in the United States from 1847 until 1867. William Bond and pioneer photographer John Adams Whipple used the Great Refractor to produce the first clear Daguerrotypes of the Moon (winning them an award at the 1851 Great Exhibition in London). Bond and his son, George Phillips Bond (the second director of HCO), used it to discover Saturn's 8th moon, Hyperion (which was also independently discovered by William Lassell). Under the directorship of Edward Charles Pickering from 1877 to 1919, the observatory became the world's major producer of stellar spectra and magnitudes, established an observing station in Peru, and applied mass-production methods to the analysis of data. It was during this time that HCO became host to a series of major discoveries in astronomical history, powered by the observatory's so-called "Computers" (women hired by Pickering as skilled workers to process astronomical data). These "Computers" included Williamina Fleming, Annie Jump Cannon, Henrietta Swan Leavitt, Florence Cushman and Antonia Maury, all widely recognized today as major figures in scientific history. Henrietta Swan Leavitt, for example, discovered the so-called period-luminosity relation for Classical Cepheid variable stars, establishing the first major "standard candle" with which to measure the distance to galaxies. 
Now called "Leavitt's law", the discovery is regarded as one of the most foundational and important in the history of astronomy; astronomers like Edwin Hubble, for example, would later use Leavitt's law to establish that the Universe is expanding, the primary piece of evidence for the Big Bang model. Upon Pickering's retirement in 1921, the directorship of HCO fell to Harlow Shapley (a major participant in the so-called "Great Debate" of 1920). This era of the observatory was made famous by the work of Cecelia Payne-Gaposchkin, who became the first woman to earn a PhD in astronomy from Radcliffe College (a short walk from the observatory). Payne-Gapochkin's 1925 thesis proposed that stars were composed primarily of hydrogen and helium, an idea thought ridiculous at the time. Between Shapley's tenure and the formation of the CfA, the observatory was directed by Donald H. Menzel and then Leo Goldberg, both of whom maintained widely recognized programs in solar and stellar astrophysics. Menzel played a major role in encouraging the Smithsonian Astrophysical Observatory to move to Cambridge and collaborate more closely with HCO. Joint history as the Center for Astrophysics (CfA) The collaborative foundation for what would ultimately give rise to the Center for Astrophysics began with SAO's move to Cambridge in 1955. Fred Whipple, who was already chair of the Harvard Astronomy Department (housed within HCO since 1931), was named SAO's new director at the start of this new era; an early test of the model for a unified directorship across HCO and SAO. The following 18 years would see the two independent entities merge ever closer together, operating effectively (but informally) as one large research center. This joint relationship was formalized as the new Harvard–Smithsonian Center for Astrophysics on July 1, 1973. George B. Field, then affiliated with Berkeley, was appointed as its first director. That same year, a new astronomical journal, the CfA Preprint Series was created, and a CfA/SAO instrument flying aboard Skylab discovered coronal holes on the Sun. The founding of the CfA also coincided with the birth of X-ray astronomy as a new, major field that was largely dominated by CfA scientists in its early years. Riccardo Giacconi, regarded as the "father of X-ray astronomy", founded the High Energy Astrophysics Division within the new CfA by moving most of his research group (then at American Sciences and Engineering) to SAO in 1973. That group would later go on to launch the Einstein Observatory (the first imaging X-ray telescope) in 1976, and ultimately lead the proposals and development of what would become the Chandra X-ray Observatory. Chandra, the second of NASA's Great Observatories and still the most powerful X-ray telescope in history, continues operations today as part of the CfA's Chandra X-ray Center. Giacconi would later win the 2002 Nobel Prize in Physics for his foundational work in X-ray astronomy. Shortly after the launch of the Einstein Observatory, the CfA's Steven Weinberg won the 1979 Nobel Prize in Physics for his work on electroweak unification. The following decade saw the start of the landmark CfA Redshift Survey (the first attempt to map the large scale structure of the Universe), as well as the release of the "Field Report", a highly influential Astronomy and Astrophysics Decadal Survey chaired by the outgoing CfA Director George Field. 
He would be replaced in 1982 by Irwin Shapiro, who during his tenure as director (1982 to 2004) oversaw the expansion of the CfA's observing facilities around the world, including the newly named Fred Lawrence Whipple Observatory, the Infrared Telescope (IRT) aboard the Space Shuttle, the 6.5-meter Multiple Mirror Telescope (MMT), the SOHO satellite, and the launch of Chandra in 1999. CfA-led discoveries throughout this period include canonical work on Supernova 1987A, the "CfA2 Great Wall" (then the largest known coherent structure in the Universe), the best-yet evidence for supermassive black holes, and the first convincing evidence for an extrasolar planet. The 1980s also saw the CfA play a distinct role in the history of computer science and the internet: in 1986, SAO started developing SAOImage, one of the world's first X11-based applications made publicly available (its successor, DS9, remains the most widely used astronomical FITS image viewer worldwide). During this time, scientists and software developers at the CfA also began work on what would become the Astrophysics Data System (ADS), one of the world's first online databases of research papers. By 1993, the ADS was running the first routine transatlantic queries between databases, a foundational aspect of the internet today. The CfA today Research at the CfA Charles Alcock, known for a number of major works related to massive compact halo objects, was named the third director of the CfA in 2004 and served until 2022, when he was succeeded by Lisa Kewley. Today the CfA remains one of the largest and most productive astronomical institutes in the world, with more than 850 staff and an annual budget in excess of $100 million. The Harvard Department of Astronomy, housed within the CfA, maintains a continual complement of approximately 60 PhD students, more than 100 postdoctoral researchers, and roughly 25 undergraduate astronomy and astrophysics majors from Harvard College. SAO, meanwhile, hosts a long-running and highly rated REU Summer Intern program as well as many visiting graduate students. The CfA estimates that roughly 10% of the professional astrophysics community in the United States spent at least a portion of their career or education there. The CfA is either a lead or major partner in the operations of the Fred Lawrence Whipple Observatory, the Submillimeter Array, MMT Observatory, the South Pole Telescope, VERITAS, and a number of other smaller ground-based telescopes. The CfA's 2019–2024 Strategic Plan includes the construction of the Giant Magellan Telescope as a driving priority for the center. Along with the Chandra X-ray Observatory, the CfA plays a central role in a number of space-based observing facilities, including the recently launched Parker Solar Probe, Kepler space telescope, the Solar Dynamics Observatory (SDO), and Hinode. The CfA, via the Smithsonian Astrophysical Observatory, recently played a major role in the Lynx X-ray Observatory, a NASA-funded large mission concept study commissioned as part of the 2020 Astronomy and Astrophysics Decadal Survey ("Astro2020"). If launched, Lynx would be the most powerful X-ray observatory constructed to date, enabling order-of-magnitude advances in capability over Chandra. SAO is one of the 13 stakeholder institutes for the Event Horizon Telescope Board, and the CfA hosts its Array Operations Center. In 2019, the project revealed the first direct image of a black hole. The result is widely regarded as a triumph not only of observational astronomy, but of its intersection with theoretical astrophysics. 
Union of the observational and theoretical subfields of astrophysics has been a major focus of the CfA since its founding. In 2018, the CfA rebranded, changing its official name to the "Center for Astrophysics | Harvard & Smithsonian" in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. Today, the CfA receives roughly 70% of its funding from NASA, 22% from Smithsonian federal funds, and 4% from the National Science Foundation. The remaining 4% comes from contributors including the United States Department of Energy and the Annenberg Foundation, as well as from other gifts and endowments. Organizational structure Research across the CfA is organized into six divisions and seven research centers: Scientific divisions within the CfA Atomic and Molecular Physics (AMP) High Energy Astrophysics (HEA) Optical and Infrared Astronomy (OIR) Radio and Geoastronomy (RG) Solar, Stellar, and Planetary Sciences (SSP) Theoretical Astrophysics (TA) Centers hosted at the CfA Chandra X-ray Center (CXC), the science operations center for NASA's Chandra X-ray Observatory Institute for Theory and Computation (ITC) Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP) Center for Parallel Astrophysical Computing (CPAC) Minor Planet Center (MPC) Telescope Data Center (TDC) Radio Telescope Data Center (RTDC) Solar & Stellar X-ray Group (SSXG) The CfA is also host to the Harvard University Department of Astronomy, large central engineering and computation facilities, the Science Education Department, the John G. Wolbach Library, the world's largest database of astronomy and physics papers (ADS), and the world's largest collection of astronomical photographic plates. Observatories operated with CfA participation Ground-based observatories Fred Lawrence Whipple Observatory Magellan telescopes MMT Observatory Event Horizon Telescope South Pole Telescope Submillimeter Array 1.2-Meter Millimeter-Wave Telescope Very Energetic Radiation Imaging Telescope Array System (VERITAS) Space-based observatories and probes Chandra X-ray Observatory Transiting Exoplanet Survey Satellite (TESS) Parker Solar Probe Hinode Kepler Solar Dynamics Observatory (SDO) Solar and Heliospheric Observatory (SOHO) Spitzer Space Telescope Planned future observatories Lynx X-ray Observatory Giant Magellan Telescope Murchison Widefield Array Square Kilometer Array Pan-STARRS Vera C. Rubin Observatory (formerly called the Large Synoptic Survey Telescope) See also Clara Sousa-Silva, research scientist List of astronomical observatories References External links Astronomical observatories in Massachusetts Astronomy institutes and departments Astrophysics research institutes Harvard University research institutes Smithsonian Institution research programs Research institutes established in 1973 1973 establishments in Massachusetts Harvard University buildings
Harvard–Smithsonian Center for Astrophysics
[ "Physics", "Astronomy" ]
3,351
[ "Astronomy organizations", "Astrophysics research institutes", "Astrophysics", "Astronomy institutes and departments" ]
1,505,215
https://en.wikipedia.org/wiki/Tip%20of%20the%20red-giant%20branch
Tip of the red-giant branch (TRGB) is a primary distance indicator used in astronomy. It uses the luminosity of the brightest red-giant-branch stars in a galaxy as a standard candle to gauge the distance to that galaxy. It has been used in conjunction with observations from the Hubble Space Telescope to determine the relative motions of the Local Cluster of galaxies within the Local Supercluster. Ground-based, 8-meter-class telescopes like the VLT are also able to measure the TRGB distance within reasonable observation times in the local universe. Method The Hertzsprung–Russell diagram (HR diagram) is a plot of stellar luminosity versus surface temperature for a population of stars. During the core hydrogen burning phase of a Sun-like star's lifetime, it will appear on the HR diagram at a position along a diagonal band called the main sequence. When the hydrogen at the core is exhausted, energy will continue to be generated by hydrogen fusion in a shell around the core. The center of the star will accumulate the helium "ash" from this fusion and the star will migrate along an evolutionary branch of the HR diagram that leads toward the upper right. That is, the surface temperature will decrease and the total energy output (luminosity) of the star will increase as the surface area increases. At a certain point, the helium at the core of the star will reach a pressure and temperature where it can begin to undergo nuclear fusion through the triple-alpha process. For a star with less than 1.8 times the mass of the Sun, this will occur in a process called the helium flash. The evolutionary track of the star will then carry it toward the left of the HR diagram as the surface temperature increases under the new equilibrium. The result is a sharp discontinuity in the evolutionary track of the star on the HR diagram. This discontinuity is called the tip of the red-giant branch. When distant stars at the TRGB are measured in the I-band (in the infrared), their luminosity is somewhat insensitive to their composition of elements heavier than helium (metallicity) or their mass; they are a standard candle with an I-band absolute magnitude of –4.0±0.1. This makes the technique especially useful as a distance indicator. The TRGB indicator uses stars in the old stellar populations (Population II). See also Asymptotic giant branch Hess diagram Red clump Stellar classification References External links Large-scale structure of the cosmos Physical cosmology Red giants Standard candles
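As an illustration of the arithmetic involved, the short Python sketch below applies the standard distance modulus to a hypothetical TRGB measurement, using the calibrated I-band absolute magnitude of about −4.0 quoted above. The apparent magnitude in the example is invented for illustration and is not taken from any particular galaxy.

```python
def trgb_distance_mpc(m_I_tip, M_I_tip=-4.0):
    """Distance from the tip of the red-giant branch via the distance modulus.

    m_I_tip : apparent I-band magnitude of the observed RGB-tip discontinuity
    M_I_tip : calibrated absolute I-band magnitude of the tip (about -4.0 +/- 0.1)
    Returns the distance in megaparsecs.
    """
    mu = m_I_tip - M_I_tip             # distance modulus, mu = 5 log10(d / 10 pc)
    d_parsec = 10 ** ((mu + 5.0) / 5.0)
    return d_parsec / 1.0e6

# Hypothetical example: a tip detected at apparent magnitude 24.0
print(f"{trgb_distance_mpc(24.0):.1f} Mpc")   # about 4 Mpc
```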
Tip of the red-giant branch
[ "Physics", "Astronomy" ]
524
[ "Standard candles", "Theoretical physics", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
1,505,381
https://en.wikipedia.org/wiki/Numerical%20weather%20prediction
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs. Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires. Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions. A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models. History The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. In 1954, Carl-Gustaf Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). 
Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts. The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. Initialization The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps, available globally at fine resolution, are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast. 
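Operational centres use far more sophisticated data assimilation than this, but the core idea of objective analysis — spreading irregularly spaced observations onto a regular model grid — can be sketched with a simple inverse-distance weighting scheme. The station locations, values and weighting exponent in the Python sketch below are illustrative assumptions only and do not represent any operational system.

```python
import numpy as np

def inverse_distance_analysis(obs_xy, obs_vals, grid_x, grid_y, power=2.0):
    """Interpolate irregular observations onto a regular grid (toy objective analysis)."""
    analysis = np.zeros((grid_y.size, grid_x.size))
    for j, y in enumerate(grid_y):
        for i, x in enumerate(grid_x):
            d = np.hypot(obs_xy[:, 0] - x, obs_xy[:, 1] - y)
            if np.any(d < 1e-9):               # grid point coincides with a station
                analysis[j, i] = obs_vals[np.argmin(d)]
            else:
                w = 1.0 / d**power             # closer observations get more weight
                analysis[j, i] = np.sum(w * obs_vals) / np.sum(w)
    return analysis

# Hypothetical surface-temperature observations (x, y in km; values in deg C)
stations = np.array([[10.0, 20.0], [80.0, 40.0], [50.0, 90.0]])
temps = np.array([12.3, 15.1, 9.8])
grid = inverse_distance_analysis(stations, temps,
                                 grid_x=np.linspace(0, 100, 11),
                                 grid_y=np.linspace(0, 100, 11))
print(grid.shape)  # (11, 11) gridded analysis field
```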
A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific. Computation An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical. These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog. 
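The link described above between grid spacing, time step and numerical stability can be made concrete with a toy one-dimensional example. The Python sketch below is not an operational dynamical core: it advances a temperature-like field with a first-order upwind finite-difference scheme and picks the time step from the grid spacing and a single advection speed so that the Courant number stays below one; in a real model the fastest resolved waves, not just advection, set this limit, and all numerical values here are illustrative.

```python
import numpy as np

def advect_1d(field, speed=50.0, dx=100e3, hours=6.0, courant=0.8):
    """Advance a 1-D field with first-order upwind advection (toy dynamical core)."""
    dt = courant * dx / abs(speed)             # time step limited by numerical stability
    n_steps = int(round(hours * 3600.0 / dt))
    f = field.copy()
    for _ in range(n_steps):
        # upwind difference for speed > 0: df/dt = -u * (f[i] - f[i-1]) / dx
        f = f - speed * dt / dx * (f - np.roll(f, 1))
    return f, dt, n_steps

# Hypothetical initial state: a warm anomaly on a periodic 4000 km domain
x = np.arange(40) * 100e3
initial = 280.0 + 5.0 * np.exp(-((x - 2000e3) / 400e3) ** 2)
forecast, dt, n_steps = advect_1d(initial)
print(f"time step = {dt:.0f} s ({dt/60:.0f} min), steps for a 6 h forecast = {n_steps}")
```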
Parameterization Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that range from a few kilometres to a few hundred kilometres in length. A typical cumulus cloud has a scale of less than a kilometre, and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by processes of various sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated, then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides of only a few kilometres can explicitly represent convective clouds, although they need to parameterize cloud microphysics, which occurs at a smaller scale. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity. The amount of solar radiation reaching the ground, as well as the formation of cloud droplets, occurs on the molecular scale, and so these processes must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes. 
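As a deliberately simplified example of the cloud parameterization described above, the snippet below diagnoses a large-scale cloud fraction that is zero until the gridbox-mean relative humidity exceeds a critical value and then ramps up to full cover at saturation. The functional form and the 80% threshold are illustrative assumptions, not taken from any particular operational model.

```python
def cloud_fraction(rel_humidity, rh_crit=0.8):
    """Diagnostic large-scale cloud fraction from gridbox-mean relative humidity.

    Returns 0 below the critical relative humidity and ramps smoothly to 1
    at saturation (a textbook-style form; values here are illustrative).
    """
    if rel_humidity <= rh_crit:
        return 0.0
    if rel_humidity >= 1.0:
        return 1.0
    # quadratic ramp between the critical relative humidity and saturation
    return ((rel_humidity - rh_crit) / (1.0 - rh_crit)) ** 2

for rh in (0.70, 0.85, 0.95, 1.0):
    print(rh, round(cloud_fraction(rh), 3))
```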
Uncertainty and errors within regional models are introduced both by the global model used to supply the boundary conditions at the edge of the regional model and by errors attributable to the regional model itself. The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar level, and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain. Model output statistics Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s. Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds. Ensembles In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about five or six days into the future. 
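The practical consequence of this error doubling can be worked out in a few lines. The calculation below is only a back-of-the-envelope sketch under stated assumptions — an initial analysis error that doubles every five days and a subjective error level at which the forecast stops being useful — not a rigorous predictability estimate.

```python
import math

def days_until_error_exceeds(initial_error, tolerable_error, doubling_days=5.0):
    """Days until an error that doubles every `doubling_days` exceeds a tolerance."""
    growth_rate = math.log(2.0) / doubling_days       # exponential growth rate per day
    return math.log(tolerable_error / initial_error) / growth_rate

# Hypothetical numbers: a 0.5 deg C analysis error and a 4 deg C usefulness limit
print(round(days_until_error_exceeds(0.5, 4.0), 1))   # => 15.0 days, near the ~2-week limit
```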
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere. Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts. In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6–0.7. In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output. 
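The basic ensemble diagnostics mentioned above — the ensemble mean, the spread, and simple exceedance probabilities — are ordinary statistics over the member forecasts. The Python sketch below uses made-up values for one forecast variable at a single location and lead time; real ensemble post-processing operates on full gridded fields.

```python
import numpy as np

# Hypothetical 2-m temperature forecasts (deg C) from a 10-member ensemble
members = np.array([18.2, 19.1, 17.6, 18.9, 20.3, 18.4, 17.9, 19.6, 18.1, 19.0])

ensemble_mean = members.mean()
ensemble_spread = members.std(ddof=1)          # sample standard deviation across members
prob_above_19 = np.mean(members > 19.0)        # simple probabilistic product

print(f"mean = {ensemble_mean:.1f} C, spread = {ensemble_spread:.2f} C, "
      f"P(T > 19 C) = {prob_above_19:.0%}")
```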
Applications Air quality modeling Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts. Climate modeling A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved. Ocean surface modeling The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface. Tropical cyclone forecasting Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. 
Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models. In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance. Wildfire modeling On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present—or when enough heat is being carried away from the fiber—charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough—and/or heating rates high enough—for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere. A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport, led to reaction–diffusion systems of partial differential equations. More complex models join numerical weather models or computational fluid dynamics models with a wildfire component which allow the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions finer than about a kilometre, forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally. 
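The reaction–diffusion formulation mentioned above can be illustrated with a deliberately minimal one-dimensional sketch: a temperature-like field diffuses along a strip of fuel, and a reaction term releases heat wherever the fuel is hot enough to burn. All coefficients in the Python sketch below are arbitrary illustrative values, and wind, terrain and moisture are ignored entirely.

```python
import numpy as np

def spread_1d(n=200, steps=2000, dx=1.0, dt=0.01, diffusivity=1.0,
              ignition_temp=1.0, heat_release=5.0, burn_rate=0.5):
    """Toy 1-D reaction-diffusion model of fire spread along a strip of fuel."""
    temp = np.zeros(n)
    fuel = np.ones(n)
    temp[:5] = 5.0                                   # ignite one end of the strip
    for _ in range(steps):
        # diffusion of heat along the strip (periodic boundaries for simplicity)
        lap = (np.roll(temp, 1) - 2 * temp + np.roll(temp, -1)) / dx**2
        burning = (temp > ignition_temp) & (fuel > 0)
        reaction = np.where(burning, burn_rate * fuel, 0.0)
        # heating by diffusion and combustion, with a simple cooling term
        temp += dt * (diffusivity * lap + heat_release * reaction - 0.1 * temp)
        fuel = np.clip(fuel - dt * reaction, 0.0, 1.0)
    return temp, fuel

temp, fuel = spread_1d()
print(f"fraction of fuel consumed: {1.0 - fuel.mean():.2f}")
```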
See also Atmospheric physics Atmospheric thermodynamics Tropical cyclone forecast model Types of atmospheric models References External links NOAA Supercomputer upgrade Air Resources Laboratory Fleet Numerical Meteorology and Oceanography Center European Centre for Medium-Range Weather Forecasts UK Met Office Computational science Numerical climate and weather models Applied mathematics Weather prediction Computational fields of study
Numerical weather prediction
[ "Physics", "Mathematics", "Technology" ]
5,123
[ "Weather prediction", "Physical phenomena", "Computational fields of study", "Weather", "Applied mathematics", "Computational science", "Computing and society" ]
1,505,811
https://en.wikipedia.org/wiki/Phosphosilicate%20glass
Phosphosilicate glass, commonly referred to by the acronym PSG, is a silicate glass commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between succeedingly higher metal or conducting layers, due to its effect in gettering alkali ions. Another common type of phosphosilicate glass is borophosphosilicate glass (BPSG). Soda-lime phosphosilicate glasses also form the basis for bioactive glasses (e.g. Bioglass), a family of materials which chemically convert to mineralised bone (hydroxy-carbonate-apatite) in physiological fluid. Bismuth doped phosphosilicate glasses are being explored for use as the active gain medium in fiber lasers for fiber-optic communication. See also Wafer (electronics) References Glass compositions Semiconductor device fabrication
Phosphosilicate glass
[ "Chemistry", "Materials_science" ]
188
[ "Glass compositions", "Glass chemistry", "Semiconductor device fabrication", "Microtechnology" ]
1,505,829
https://en.wikipedia.org/wiki/Borophosphosilicate%20glass
Borophosphosilicate glass, commonly known as BPSG, is a type of silicate glass that includes additives of both boron and phosphorus. Silicate glasses such as PSG and borophosphosilicate glass are commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between succeedingly higher metal or conducting layers. BPSG has been implicated in increasing a device's susceptibility to soft errors since the boron-10 isotope is good at capturing thermal neutrons from cosmic radiation. It then undergoes fission producing a gamma ray, an alpha particle, and a lithium ion. These products may then dump charge into nearby structures, causing data loss (bit flipping, or single event upset). In critical designs, depleted boron consisting almost entirely of boron-11 is used to avoid this effect as a radiation hardening measure. Boron-11 is a by-product of the nuclear industry. References Semiconductor device fabrication Glass compositions Boron compounds
Borophosphosilicate glass
[ "Chemistry", "Materials_science" ]
212
[ "Glass compositions", "Glass chemistry", "Semiconductor device fabrication", "Microtechnology" ]
8,603,211
https://en.wikipedia.org/wiki/Plasma-enhanced%20chemical%20vapor%20deposition
Plasma-enhanced chemical vapor deposition (PECVD) is a chemical vapor deposition process used to deposit thin films from a gas state (vapor) to a solid state on a substrate. Chemical reactions are involved in the process, which occur after creation of a plasma of the reacting gases. The plasma is generally created by radio frequency (RF) alternating current (AC) frequency or direct current (DC) discharge between two electrodes, the space between which is filled with the reacting gases. Discharges for processes A plasma is any gas in which a significant percentage of the atoms or molecules are ionized. Fractional ionization in plasmas used for deposition and related materials processing varies from about 10−4 in typical capacitive discharges to as high as 5–10% in high-density inductive plasmas. Processing plasmas are typically operated at pressures of a few millitorrs to a few torr, although arc discharges and inductive plasmas can be ignited at atmospheric pressure. Plasmas with low fractional ionization are of great interest for materials processing because electrons are so light, compared to atoms and molecules, that energy exchange between the electrons and neutral gas is very inefficient. Therefore, the electrons can be maintained at very high equivalent temperatures – tens of thousands of kelvins, equivalent to several electronvolts average energy—while the neutral atoms remain at the ambient temperature. These energetic electrons can induce many processes that would otherwise be very improbable at low temperatures, such as dissociation of precursor molecules and the creation of large quantities of free radicals. The second benefit of deposition within a discharge arises from the fact that electrons are more mobile than ions. As a consequence, the plasma is normally more positive than any object it is in contact with, as otherwise, a large flux of electrons would flow from the plasma to the object. The difference in voltage between the plasma and the objects in its contacts normally occurs across a thin sheath region. Ionized atoms or molecules that diffuse to the edge of the sheath region feel an electrostatic force and are accelerated towards the neighboring surface. Thus, all surfaces exposed to the plasma receive energetic ion bombardment. The potential across the sheath surrounding an electrically isolated object (the floating potential) is typically only 10–20 V, but much higher sheath potentials are achievable by adjustments in reactor geometry and configuration. Thus, films can be exposed to energetic ion bombardment during deposition. This bombardment can lead to increases in the density of the film, and help remove contaminants, improving the film's electrical and mechanical properties. When a high-density plasma is used, the ion density can be high enough that significant sputtering of the deposited film occurs; this sputtering can be employed to help planarize the film and fill trenches or holes. Reactor types A simple DC discharge can be readily created at a few torr between two conductive electrodes, and may be suitable for deposition of conductive materials. However, insulating films will quickly extinguish this discharge as they are deposited. It is more common to excite a capacitive discharge by applying an AC or RF signal between an electrode and the conductive walls of a reactor chamber, or between two cylindrical conductive electrodes facing one another. The latter configuration is known as a parallel plate reactor. 
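The equivalence noted above between electron energies of a few electronvolts and temperatures of tens of thousands of kelvins is just a unit conversion through the Boltzmann constant. The snippet below assumes a Maxwellian electron population with mean energy (3/2)kBTe; the example energies are illustrative.

```python
K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def electron_temperature_kelvin(mean_energy_ev):
    """Equivalent temperature of a Maxwellian electron population, <E> = (3/2) k_B T."""
    return mean_energy_ev / (1.5 * K_B_EV)

for energy in (1.0, 3.0, 5.0):   # typical mean electron energies in processing plasmas
    print(f"{energy:.0f} eV  ->  about {electron_temperature_kelvin(energy):,.0f} K")
```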
Frequencies of a few tens of Hz to a few thousand Hz will produce time-varying plasmas that are repeatedly initiated and extinguished; frequencies of tens of kilohertz to tens of megahertz result in reasonably time-independent discharges. Excitation frequencies in the low-frequency (LF) range, usually around 100 kHz, require several hundred volts to sustain the discharge. These large voltages lead to high-energy ion bombardment of surfaces. High-frequency plasmas are often excited at the standard 13.56 MHz frequency widely available for industrial use; at high frequencies, the displacement current from sheath movement and scattering from the sheath assist in ionization, and thus lower voltages are sufficient to achieve higher plasma densities. Thus one can adjust the chemistry and ion bombardment in the deposition by changing the frequency of excitation, or by using a mixture of low- and high-frequency signals in a dual-frequency reactor. Excitation power of tens to hundreds of watts is typical for an electrode with a diameter of 200 to 300 mm. Capacitive plasmas are usually very lightly ionized, resulting in limited dissociation of precursors and low deposition rates. Much denser plasmas can be created using inductive discharges, in which an inductive coil excited with a high-frequency signal induces an electric field within the discharge, accelerating electrons in the plasma itself rather than just at the sheath edge. Electron cyclotron resonance reactors and helicon wave antennas have also been used to create high-density discharges. Excitation powers of 10 kW or more are often used in modern reactors. High density plasmas can also be generated by a DC discharge in an electron-rich environment, obtained by thermionic emission from heated filaments. The voltages required by the arc discharge are of the order of a few tens of volts, resulting in low energy ions. The high density, low energy plasma is exploited for the epitaxial deposition at high rates in low-energy plasma-enhanced chemical vapor deposition reactors. Origins Working at Standard Telecommunication Laboratories (STL), Harlow, Essex, R C G Swann discovered that RF discharge promoted the deposition of silicon compounds onto the quartz glass vessel wall. Several internal STL publications were followed in 1964 by French, British and US patent applications. An article was published in the August 1965 volume of Solid State Electronics. The work represented a breakthrough in the deposition of thin films of amorphous silicon, silicon nitride and silicon dioxide at temperatures significantly lower than those required by pyrolytic chemistry. Film examples and applications Plasma deposition is often used in semiconductor manufacturing to deposit films conformally (covering sidewalls) and onto wafers containing metal layers or other temperature-sensitive structures. PECVD also yields some of the fastest deposition rates while maintaining film quality (such as roughness, defects/voids), as compared with sputter deposition and thermal/electron-beam evaporation, often at the expense of uniformity. Silicon dioxide deposition Silicon dioxide can be deposited using a combination of silicon precursor gases like dichlorosilane or silane and oxygen precursors, such as oxygen and nitrous oxide, typically at pressures from a few millitorr to a few torr. 
Plasma-deposited silicon nitride, formed from silane and ammonia or nitrogen, is also widely used, although it is important to note that it is not possible to deposit a pure nitride in this fashion. Plasma nitrides always contain a large amount of hydrogen, which can be bonded to silicon (Si-H) or nitrogen (Si-NH); this hydrogen has an important influence on IR and UV absorption, stability, mechanical stress, and electrical conductivity. This is often used as a surface and bulk passivating layer for commercial multicrystalline silicon photovoltaic cells. Silicon dioxide can also be deposited from a tetraethylorthosilicate (TEOS) silicon precursor in an oxygen or oxygen-argon plasma. These films can be contaminated with significant carbon and hydrogen as silanol, and can be unstable in air. Pressures of a few torr and small electrode spacings, and/or dual frequency deposition, are helpful to achieve high deposition rates with good film stability. High-density plasma deposition of silicon dioxide from silane and oxygen/argon has been widely used to create a nearly hydrogen-free film with good conformality over complex surfaces, the latter resulting from intense ion bombardment and consequent sputtering of the deposited molecules from vertical onto horizontal surfaces. See also Low-energy plasma-enhanced chemical vapor deposition References Chemical vapor deposition Plasma processing Semiconductor device fabrication Thin film deposition
Plasma-enhanced chemical vapor deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
1,665
[ "Microtechnology", "Thin film deposition", "Coatings", "Thin films", "Semiconductor device fabrication", "Chemical vapor deposition", "Planes (geometry)", "Solid state engineering" ]
8,606,325
https://en.wikipedia.org/wiki/Kutta%E2%80%93Joukowski%20theorem
The Kutta–Joukowski theorem is a fundamental theorem in aerodynamics used for the calculation of lift of an airfoil (and any two-dimensional body including circular cylinders) translating in a uniform fluid at a constant speed so large that the flow seen in the body-fixed frame is steady and unseparated. The theorem relates the lift generated by an airfoil to the speed of the airfoil through the fluid, the density of the fluid and the circulation around the airfoil. The circulation is defined as the line integral around a closed loop enclosing the airfoil of the component of the velocity of the fluid tangent to the loop. It is named after Martin Kutta and Nikolai Zhukovsky (or Joukowski) who first developed its key ideas in the early 20th century. The Kutta–Joukowski theorem is an inviscid theory, but it is a good approximation for real viscous flow in typical aerodynamic applications. The Kutta–Joukowski theorem relates lift to circulation much like the Magnus effect relates side force (called Magnus force) to rotation. However, the circulation here is not induced by rotation of the airfoil. The fluid flow in the presence of the airfoil can be considered to be the superposition of a translational flow and a rotating flow. This rotating flow is induced by the effects of camber, angle of attack and the sharp trailing edge of the airfoil. It should not be confused with a vortex like a tornado encircling the airfoil. At a large distance from the airfoil, the rotating flow may be regarded as induced by a line vortex (with the rotating line perpendicular to the two-dimensional plane). In the derivation of the Kutta–Joukowski theorem the airfoil is usually mapped onto a circular cylinder. In many textbooks, the theorem is proved for a circular cylinder and the Joukowski airfoil, but it holds true for general airfoils. Lift force formula The theorem applies to two-dimensional inviscid flow around an airfoil section (or any shape of infinite span). The lift per unit span L′ of the airfoil is given by L′ = ρ∞ V∞ Γ, where ρ∞ and V∞ are the fluid density and the fluid velocity far upstream of the airfoil, and Γ is the circulation, defined as the line integral Γ = ∮C V cos θ ds around a closed contour C enclosing the airfoil and followed in the negative (clockwise) direction. As explained below, this path must be in a region of potential flow and not in the boundary layer of the cylinder. The integrand V cos θ is the component of the local fluid velocity in the direction tangent to the curve C, and ds is an infinitesimal length on the curve C. This equation is a form of the Kutta–Joukowski theorem. Kuethe and Schetzer state the Kutta–Joukowski theorem as follows: The force per unit length acting on a right cylinder of any cross section whatsoever is equal to ρ∞ V∞ Γ and is perpendicular to the direction of V∞. Circulation and the Kutta condition A lift-producing airfoil either has camber or operates at a positive angle of attack, the angle between the chord line and the fluid flow far upstream of the airfoil. Moreover, the airfoil must have a sharp trailing edge. Any real fluid is viscous, which implies that the fluid velocity vanishes on the airfoil. Prandtl showed that for large Reynolds number, defined as Re = ρ∞ V∞ c/μ (with c the chord length and μ the dynamic viscosity), and small angle of attack, the flow around a thin airfoil is composed of a narrow viscous region called the boundary layer near the body and an inviscid flow region outside. In applying the Kutta–Joukowski theorem, the loop must be chosen outside this boundary layer. 
(For example, the circulation calculated using the loop corresponding to the surface of the airfoil would be zero for a viscous fluid.) The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition. Kutta and Joukowski showed that for computing the pressure and lift of a thin airfoil for flow at large Reynolds number and small angle of attack, the flow can be assumed inviscid in the entire region outside the airfoil provided the Kutta condition is imposed. This is known as potential flow theory and works remarkably well in practice. Derivation A heuristic argument, based on physical insight, is presented below; a formal and technical derivation requires basic vector analysis and complex analysis. Heuristic argument For a heuristic argument, consider a thin airfoil of chord c and infinite span, moving through air of density ρ. Let the airfoil be inclined to the oncoming flow to produce an air speed V on one side of the airfoil, and an air speed V + v on the other side. The circulation is then Γ = (V + v)c − Vc = vc. The difference in pressure Δp between the two sides of the airfoil can be found by applying Bernoulli's equation: p + ½ρV² = (p − Δp) + ½ρ(V + v)², so that, neglecting the small term ½ρv², Δp = ρVv. The downward force on the air, per unit span, is therefore Δp·c = ρVvc, and the upward force (lift) on the airfoil is ρVvc = ρVΓ. A differential version of this theorem applies on each element of the plate and is the basis of thin-airfoil theory. Lift forces for more complex situations The lift predicted by the Kutta–Joukowski theorem within the framework of inviscid potential flow theory is quite accurate, even for real viscous flow, provided the flow is steady and unseparated. In deriving the Kutta–Joukowski theorem, the assumption of irrotational flow was used. When there are free vortices outside of the body, as may be the case for a large number of unsteady flows, the flow is rotational. When the flow is rotational, more complicated theories should be used to derive the lift forces. Below are several important examples. Impulsively started flow at small angle of attack For an impulsively started flow such as obtained by suddenly accelerating an airfoil or setting an angle of attack, there is a vortex sheet continuously shed at the trailing edge and the lift force is unsteady or time-dependent. For small angle of attack starting flow, the vortex sheet follows a planar path, and the curve of the lift coefficient as function of time is given by the Wagner function. In this case the initial lift is one half of the final lift given by the Kutta–Joukowski formula. The lift attains 90% of its steady state value when the wing has traveled a distance of about seven chord lengths. Impulsively started flow at large angle of attack When the angle of attack is high enough, the trailing edge vortex sheet is initially in a spiral shape and the lift is singular (infinitely large) at the initial time. The lift drops for a very short time period before the usually assumed monotonically increasing lift curve is reached. 
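The heuristic argument above can be checked numerically. The short Python sketch below re-runs that arithmetic for one set of made-up numbers: given the speeds on the two sides of a thin airfoil, it computes the circulation, the exact Bernoulli pressure difference, and the resulting lift per unit span, and compares the result with ρVΓ.

```python
rho = 1.225        # air density, kg/m^3
V = 50.0           # speed on the lower surface, m/s (illustrative value)
v = 5.0            # extra speed on the upper surface, m/s (illustrative value)
c = 1.5            # chord, m

gamma = v * c                                    # circulation magnitude, Gamma = v*c
dp_exact = 0.5 * rho * ((V + v)**2 - V**2)       # Bernoulli pressure difference
dp_approx = rho * V * v                          # leading-order approximation used above

lift_exact = dp_exact * c                        # pressure difference times chord
lift_kj = rho * V * gamma                        # Kutta-Joukowski form: rho * V * Gamma

print(f"exact lift   = {lift_exact:.1f} N/m")
print(f"rho*V*Gamma  = {lift_kj:.1f} N/m")       # agrees to within the small v^2 term
```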
Starting flow at large angle of attack for wings with sharp leading edges If, as for a flat plate, the leading edge is also sharp, then vortices also shed at the leading edge and the role of leading edge vortices is two-fold: 1) they are lift increasing when they are still close to the leading edge, so that they elevate the Wagner lift curve, and 2) they are detrimental to lift when they are convected to the trailing edge, inducing a new trailing edge vortex spiral moving in the lift decreasing direction. For this type of flow a vortex force line (VFL) map can be used to understand the effect of the different vortices in a variety of situations (including more situations than starting flow) and may be used to improve vortex control to enhance or reduce the lift. The vortex force line map is a two dimensional map on which vortex force lines are displayed. For a vortex at any point in the flow, its lift contribution is proportional to its speed, its circulation and the cosine of the angle between the streamline and the vortex force line. Hence the vortex force line map clearly shows whether a given vortex is lift producing or lift detrimental. Lagally theorem When a (mass) source is fixed outside the body, a force correction due to this source can be expressed as the product of the strength of outside source and the induced velocity at this source by all the causes except this source. This is known as the Lagally theorem. For two-dimensional inviscid flow, the classical Kutta Joukowski theorem predicts a zero drag. When, however, there is vortex outside the body, there is a vortex induced drag, in a form similar to the induced lift. Generalized Lagally theorem For free vortices and other bodies outside one body without bound vorticity and without vortex production, a generalized Lagally theorem holds, with which the forces are expressed as the products of strength of inner singularities (image vortices, sources and doublets inside each body) and the induced velocity at these singularities by all causes except those inside this body. The contribution due to each inner singularity sums up to give the total force. The motion of outside singularities also contributes to forces, and the force component due to this contribution is proportional to the speed of the singularity. Individual force of each body for multiple-body rotational flow When in addition to multiple free vortices and multiple bodies, there are bound vortices and vortex production on the body surface, the generalized Lagally theorem still holds, but a force due to vortex production exists. This vortex production force is proportional to the vortex production rate and the distance between the vortex pair in production. With this approach, an explicit and algebraic force formula, taking into account of all causes (inner singularities, outside vortices and bodies, motion of all singularities and bodies, and vortex production) holds individually for each body with the role of other bodies represented by additional singularities. Hence a force decomposition according to bodies is possible. General three-dimensional viscous flow For general three-dimensional, viscous and unsteady flow, force formulas are expressed in integral forms. The volume integration of certain flow quantities, such as vorticity moments, is related to forces. Various forms of integral approach are now available for unbounded domain and for artificially truncated domain. 
The Kutta–Joukowski theorem can be recovered from these approaches when applied to a two-dimensional airfoil and when the flow is steady and unseparated. Lifting line theory for wings, wing-tip vortices and induced drag A wing has a finite span, and the circulation at any section of the wing varies with the spanwise direction. This variation is compensated by the release of streamwise vortices, called trailing vortices, due to conservation of vorticity or Kelvin's theorem of circulation conservation. These streamwise vortices merge into two counter-rotating strong spirals separated by a distance close to the wingspan, and their cores may be visible if relative humidity is high. Treating the trailing vortices as a series of semi-infinite straight line vortices leads to the well-known lifting line theory. By this theory, the wing has a lift force smaller than that predicted by a purely two-dimensional theory using the Kutta–Joukowski theorem. This is due to the upstream effects of the trailing vortices' added downwash on the angle of attack of the wing. This reduces the wing's effective angle of attack, decreasing the amount of lift produced at a given angle of attack and requiring a higher angle of attack to recover this lost lift. At this new higher angle of attack, drag has also increased. Induced drag effectively reduces the slope of the lift curve of a 2-D airfoil and increases the angle of attack required for a given lift coefficient (while also decreasing the maximum attainable lift coefficient). See also Horseshoe vortex References Bibliography Milne-Thomson, L.M. (1973) Theoretical Aerodynamics, Dover Publications Inc, New York Aircraft aerodynamics Eponymous theorems of physics Fluid dynamics Physics theorems Aircraft wing design
Kutta–Joukowski theorem
[ "Physics", "Chemistry", "Engineering" ]
2,449
[ "Equations of physics", "Chemical engineering", "Eponymous theorems of physics", "Piping", "Physics theorems", "Fluid dynamics" ]
39
https://en.wikipedia.org/wiki/Albedo
Albedo is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun). While directional-hemispherical reflectance factor is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages. Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation). Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, therefore it reflects far more solar energy back to space than other types of land cover or open water. Ice–albedo feedback plays an important role in global climate change. Albedo is an important concept in climate science. Terrestrial albedo Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds. Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. The BRDF can facilitate translations of observations of reflectance into albedo. Earth's average surface temperature due to its albedo and the greenhouse effect is currently about 15 °C. If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop far below its present value. 
If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop noticeably. In contrast, if the entire Earth were covered by water – a so-called ocean planet – the average temperature on the planet would rise substantially. In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017) as measured by earthshine using modern photometric techniques. This dimming may have been co-caused by climate change and may itself contribute to a substantial increase in global warming. However, the link to climate change has not been explored to date and it is unclear whether or not this represents an ongoing trend. White-sky, black-sky, and blue-sky albedo For land surfaces, it has been shown that the albedo at a particular solar zenith angle θi can be approximated by the proportionate sum of two terms: the directional-hemispherical reflectance at that solar zenith angle, αbs(θi), sometimes referred to as black-sky albedo, and the bi-hemispherical reflectance, αws, sometimes referred to as white-sky albedo. With (1 − D) being the proportion of direct radiation from a given solar angle, and D being the proportion of diffuse illumination, the actual albedo α (also called blue-sky albedo) can then be given as: α = (1 − D) αbs(θi) + D αws. This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface. Changes to albedo due to human activities Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. Human impacts to "the physical properties of the land surface can perturb the climate by altering the Earth's radiative energy balance" even on a small scale or when undetected by satellites. Urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. An estimate in 2022 found that on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions." Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy to mitigate energy crises and global warming, known as passive daytime radiative cooling (PDRC). Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved. The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change. Examples of terrestrial albedo effects Illumination Albedo is not directly dependent on the illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). However, albedo and illumination both vary by latitude. 
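As a quick numerical illustration of the black-sky/white-sky blend just described, the following minimal Python sketch interpolates between the two intrinsic albedos using the diffuse fraction D; the surface and sky values are assumed, chosen only to show the arithmetic.

# Illustrative sketch of the blue-sky albedo interpolation described above.
# All numerical values are hypothetical.

def blue_sky_albedo(black_sky, white_sky, diffuse_fraction):
    """Blend black-sky and white-sky albedo using the diffuse fraction D."""
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

black_sky = 0.22   # directional-hemispherical reflectance at the current sun angle (assumed)
white_sky = 0.26   # bi-hemispherical reflectance (assumed)
diffuse = 0.35     # proportion of diffuse illumination, e.g. under thin cloud (assumed)

print(round(blue_sky_albedo(black_sky, white_sky, diffuse), 3))  # -> 0.234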
Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics. Insolation effects The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes. Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern because Arctic ice and snow have been melting at higher rates due to higher temperatures, creating notably darker regions (open water or bare ground) that reflect less heat back into space. This feedback loop results in a reduced albedo. Climate and weather Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather. The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds. Albedo–temperature feedback When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight, leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting and thus reduces the albedo further, resulting in still more heating. Snow Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica, snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (referred to as the ice–albedo positive feedback). In Switzerland, citizens have been protecting their glaciers with large white tarpaulins to slow down the ice melt. These large white sheets help to reflect the sun's rays and deflect the heat. Although this method is very expensive, it has been shown to work, reducing snow and ice melt by 60%. Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. 
Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming. Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets. The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. Because of this, in order to reduce the error of energy estimates, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value for albedo over broad regions. Small-scale effects Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing. Solar photovoltaic effects Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effects of a spectrally responsive albedo are illustrated by the differences between the spectrally weighted albedo of solar photovoltaic technology based on hydrogenated amorphous silicon (a-Si:H) and crystalline silicon (c-Si)-based compared to traditional spectral-integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells where rear surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis on the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops and residential pitched-roof applications. Trees Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo). In the case of evergreen forests with seasonal snow cover, albedo reduction may be significant enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as strong greenhouse gas, and can increase albedo when it condenses into clouds. 
Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate. Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit. In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy. Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming. Research in 2023, drawing from 176 flux stations globally, revealed a climate trade-off: increased carbon uptake from afforestation results in reduced albedo. Initially, this reduction may lead to moderate global warming over a span of approximately 20 years, but it is expected to transition into significant cooling thereafter. Water Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations. At the scale of the wavelength of light even wavy water is always smooth so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle. Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light. Note that white caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection. Snow on top of this sea ice increases the albedo to 0.9. Clouds Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. 
"On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth." Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were as much as colder than temperatures several miles away under clear skies. Aerosol effects Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain. Black carbon Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m−2, with a range +0.1 to +0.4 W m−2. Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo. Astronomical albedo In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved. Optical or visual albedo The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids. Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds. The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies. Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion. 
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation. An important relationship between an object's astronomical (geometric) albedo p, absolute magnitude H and diameter D is given by: D = (1329/√p) × 10^(−H/5) km, where p is the astronomical albedo, D is the diameter in kilometers, and H is the absolute magnitude. Radar albedo In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, σOC, σSC, or σT (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power. Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering. For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo), defined as σOC divided by πR², where πR² is the effective cross-sectional area of the target object with mean radius R. A smooth metallic sphere would have a radar albedo of 1. Radar albedos of Solar System objects The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references. Relationship to surface bulk density In the event that most of the echo is from first-surface reflections (a low OC radar albedo, of order 0.1 or so), the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (also known as reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths, typically at the decimeter scale) using empirical relationships between reflectivity and bulk density. History The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria. 
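Returning to the geometric-albedo size relation given above, a minimal Python sketch shows how strongly the inferred diameter depends on the assumed albedo; the asteroid parameters below are hypothetical, chosen only for illustration.

# Illustrative sketch of D = 1329/sqrt(p) * 10**(-H/5), the diameter-albedo-magnitude
# relation quoted above. The example asteroid values are assumed.

def diameter_km(geometric_albedo, absolute_magnitude):
    """Estimate diameter in km from geometric albedo p and absolute magnitude H."""
    return 1329.0 / geometric_albedo ** 0.5 * 10.0 ** (-absolute_magnitude / 5.0)

# Hypothetical asteroid with H = 15, assuming a dark vs. a bright surface:
print(round(diameter_km(0.05, 15.0), 1))   # dark surface (p = 0.05)  -> ~5.9 km
print(round(diameter_km(0.25, 15.0), 1))   # bright surface (p = 0.25) -> ~2.7 km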
See also Bio-geoengineering Cool roof Daisyworld Emissivity Exitance Global dimming Ice–albedo feedback Irradiance Kirchhoff's law of thermal radiation Opposition surge Polar see-saw Radar astronomy Solar radiation management References External links Albedo Project Albedo – Encyclopedia of Earth NASA MODIS BRDF/albedo product site Ocean surface albedo look-up-table Surface albedo derived from Meteosat observations A discussion of Lunar albedos reflectivity of metals (chart) Land surface effects on climate Climate change feedbacks Climate forcing Climatology Electromagnetic radiation Meteorological quantities Radiometry Scattering, absorption and radiative transfer (optics) Radiation 1760s neologisms
Albedo
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
5,083
[ "Transport phenomena", "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Electromagnetic radiation", "Quantity", "Meteorological quantities", "Waves", "Scattering, absorption and radiative transfer (optics)", "Radiation", "Radiometry" ]
612
https://en.wikipedia.org/wiki/Arithmetic%20mean
In mathematics and statistics, the arithmetic mean, arithmetic average, or just the mean or average (when the context is clear) is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as geometric and harmonic. In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population. While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency. Definition The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. Symbolically, for a data set consisting of the values x1, x2, …, xn, the arithmetic mean is defined by the formula: x̄ = (1/n) Σ xi = (x1 + x2 + ⋯ + xn)/n. (For an explanation of the summation operator, see summation.) In simpler terms, the formula for the arithmetic mean is: (sum of all observations) ÷ (number of observations). For example, the arithmetic mean of a list of monthly salaries is the total of the salaries divided by the number of employees. If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean and denoted by the Greek letter μ. If the data set is a statistical sample (a subset of the population), it is called the sample mean (which is denoted as x̄). The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (meaning its coefficients sum to 1), it can be defined on a convex space, not only a vector space. History The statistician Churchill Eisenhart, senior research fellow at the U. S. National Bureau of Standards, traced the history of the arithmetic mean in detail. In the modern age it started to be used as a way of combining various observations that should be identical, but were not, such as estimates of the direction of magnetic north. In 1635 the mathematician Henry Gellibrand described as "meane" the midpoint of a lowest and highest number, not quite the arithmetic mean. In 1668, a person known as "DB" was quoted in the Transactions of the Royal Society describing "taking the mean" of five values. Motivating properties The arithmetic mean has several properties that make it interesting, especially as a measure of central tendency. These include: If numbers x1, …, xn have mean x̄, then (x1 − x̄) + (x2 − x̄) + ⋯ + (xn − x̄) = 0. Since xi − x̄ is the signed distance from a given number to the mean, one way to interpret this property is by saying that the numbers to the left of the mean are balanced by the numbers to the right. The mean is the only number for which the residuals (deviations from the estimate) sum to zero. 
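A minimal Python sketch of the definition and of the zero-sum-of-residuals property just described; the data values are made up purely for illustration.

# Illustrative sketch with made-up data: the arithmetic mean, and the fact that the
# residuals (deviations from the mean) sum to zero.

values = [2.0, 3.0, 5.0, 10.0]           # hypothetical observations
mean = sum(values) / len(values)          # (2 + 3 + 5 + 10) / 4 = 5.0

residuals = [x - mean for x in values]    # [-3.0, -2.0, 0.0, 5.0]
print(mean, sum(residuals))               # -> 5.0 0.0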
This can also be interpreted as saying that the mean is translationally invariant in the sense that, for any real number a, the mean of the shifted values x1 + a, …, xn + a equals x̄ + a. If it is required to use a single number as a "typical" value for a set of known numbers x1, …, xn, then the arithmetic mean of the numbers does this best since it minimizes the sum of squared deviations from the typical value: the sum of (xi − x̄)². The sample mean is also the best single predictor because it has the lowest root mean squared error. If the arithmetic mean of a population of numbers is desired, then the estimate of it that is unbiased is the arithmetic mean of a sample drawn from the population. The arithmetic mean is independent of the scale of the units of measurement, in the sense that the mean of the rescaled values αx1, …, αxn equals α times the mean of x1, …, xn. So, for example, calculating a mean of liters and then converting to gallons is the same as converting to gallons first and then calculating the mean. This is also called first order homogeneity. Additional properties The arithmetic mean of a sample is always between the largest and smallest values in that sample. The arithmetic mean of any number of equal-sized groups of numbers, taken together, is the arithmetic mean of the arithmetic means of each group. Contrast with median The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger, and no more than half are smaller than it. If elements in the data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, in a data sample whose sorted values form an arithmetic progression, the mean and the median are the same. However, when we consider a sample that cannot be arranged to increase arithmetically, the median and arithmetic average can differ significantly, and the average value can vary considerably from most values in the sample and can be larger or smaller than most. There are applications of this phenomenon in many fields. For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income. Generalizations Weighted average A weighted average, or weighted mean, is an average in which some data points count more heavily than others in that they are given more weight in the calculation. For example, the plain arithmetic mean of two numbers gives each of them a weight of one half. In contrast, a weighted mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated by weighting the first number by 2/3 and the second by 1/3. Here the weights, which necessarily sum to one, are 2/3 and 1/3, the former being twice the latter. The arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all weights are equal to the same number (1/2 in the above two-number example, and 1/n in a situation with n numbers being averaged). Continuous probability distributions If a numerical property, and any sample of data from it, can take on any value from a continuous range instead of, for example, just integers, then the probability of a number falling into some range of possible values can be described by integrating a continuous probability distribution across this range, even when the naive probability for a sample number taking one certain value from infinitely many is zero. 
In this context, the analog of a weighted average, in which there are infinitely many possibilities for the precise value of the variable in each range, is called the mean of the probability distribution. The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the median mentioned above and the mode (the three Ms), are equal. This equality does not hold for other probability distributions, as illustrated for the log-normal distribution here. Angles Particular care is needed when using cyclic data, such as phases or angles. Taking the arithmetic mean of 1° and 359° yields a result of 180°. This is incorrect for two reasons: Firstly, angle measurements are only defined up to an additive constant of 360° ( or , if measuring in radians). Thus, these could easily be called 1° and -1°, or 361° and 719°, since each one of them produces a different average. Secondly, in this situation, 0° (or 360°) is geometrically a better average value: there is lower dispersion about it (the points are both 1° from it and 179° from 180°, the putative average). In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (that is, define the mean as the central point: the point about which one has the lowest dispersion) and redefine the difference as a modular distance (i.e., the distance on the circle: so the modular distance between 1° and 359° is 2°, not 358°). Symbols and encoding The arithmetic mean is often denoted by a bar (vinculum or macron), as in . Some software (text processors, web browsers) may not display the "x̄" symbol correctly. For example, the HTML symbol "x̄" combines two codes — the base letter "x" plus a code for the line above ( ̄ or ¯). In some document formats (such as PDF), the symbol may be replaced by a "¢" (cent) symbol when copied to a text processor such as Microsoft Word. See also Fréchet mean Generalized mean Inequality of arithmetic and geometric means Sample mean and covariance Standard deviation Standard error of the mean Summary statistics Notes References Further reading External links Calculations and comparisons between arithmetic mean and geometric mean of two numbers Calculate the arithmetic mean of a series of numbers on fxSolver Means
Arithmetic mean
[ "Physics", "Mathematics" ]
1,903
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
639
https://en.wikipedia.org/wiki/Alkane
In organic chemistry, an alkane, or paraffin (a historical trivial name that also has other meanings), is an acyclic saturated hydrocarbon. In other words, an alkane consists of hydrogen and carbon atoms arranged in a tree structure in which all the carbon–carbon bonds are single. Alkanes have the general chemical formula CnH2n+2. The alkanes range in complexity from the simplest case of methane (CH4), where n = 1 (sometimes called the parent molecule), to arbitrarily large and complex molecules, like pentacontane (C50H102) or 6-ethyl-2-methyl-5-(1-methylethyl) octane, an isomer of tetradecane (C14H30). The International Union of Pure and Applied Chemistry (IUPAC) defines alkanes as "acyclic branched or unbranched hydrocarbons having the general formula CnH2n+2, and therefore consisting entirely of hydrogen atoms and saturated carbon atoms". However, some sources use the term to denote any saturated hydrocarbon, including those that are either monocyclic (i.e. the cycloalkanes) or polycyclic, despite them having a distinct general formula (e.g. cycloalkanes are CnH2n). In an alkane, each carbon atom is sp3-hybridized with 4 sigma bonds (either C–C or C–H), and each hydrogen atom is joined to one of the carbon atoms (in a C–H bond). The longest series of linked carbon atoms in a molecule is known as its carbon skeleton or carbon backbone. The number of carbon atoms may be considered as the size of the alkane. One group of the higher alkanes are waxes, solids at standard ambient temperature and pressure (SATP), for which the number of carbon atoms in the carbon backbone is greater than about 17. With their repeated –CH2– units, the alkanes constitute a homologous series of organic compounds in which the members differ in molecular mass by multiples of 14.03 u (the total mass of each such methylene-bridge unit, which comprises a single carbon atom of mass 12.01 u and two hydrogen atoms of mass ~1.01 u each). Methane is produced by methanogenic bacteria and some long-chain alkanes function as pheromones in certain animal species or as protective waxes in plants and fungi. Nevertheless, most alkanes do not have much biological activity. They can be viewed as molecular trees upon which can be hung the more active/reactive functional groups of biological molecules. The alkanes have two main commercial sources: petroleum (crude oil) and natural gas. An alkyl group is an alkane-based molecular fragment that bears one open valence for bonding. They are generally abbreviated with the symbol for any organyl group, R, although Alk is sometimes used to specifically symbolize an alkyl group (as opposed to an alkenyl group or aryl group). Structure and classification Ordinarily the C–C single bond distance is about 1.54 × 10−10 m (154 pm). Saturated hydrocarbons can be linear, branched, or cyclic. The third group is sometimes called cycloalkanes. Very complicated structures are possible by combining linear, branched, and cyclic alkanes. Isomerism Alkanes with more than three carbon atoms can be arranged in various ways, forming structural isomers. The simplest isomer of an alkane is the one in which the carbon atoms are arranged in a single chain with no branches. This isomer is sometimes called the n-isomer (n for "normal", although it is not necessarily the most common). However, the chain of carbon atoms may also be branched at one or more points. The number of possible isomers increases rapidly with the number of carbon atoms. 
For example, for acyclic alkanes: C1: methane only C2: ethane only C3: propane only C4: 2 isomers: butane and isobutane C5: 3 isomers: pentane, isopentane, and neopentane C6: 5 isomers: hexane, 2-methylpentane, 3-methylpentane, 2,2-dimethylbutane, and 2,3-dimethylbutane C7: 9 isomers: heptane, 2-methylhexane, 3-methylhexane, 2,2-dimethylpentane, 2,3-dimethylpentane, 2,4-dimethylpentane, 3,3-dimethylpentane, 3-ethylpentane, 2,2,3-trimethylbutane C8: 18 isomers: octane, 2-methylheptane, 3-methylheptane, 4-methylheptane, 2,2-dimethylhexane, 2,3-dimethylhexane, 2,4-dimethylhexane, 2,5-dimethylhexane, 3,3-dimethylhexane, 3,4-dimethylhexane, 3-ethylhexane, 2,2,3-trimethylpentane, 2,2,4-trimethylpentane, 2,3,3-trimethylpentane, 2,3,4-trimethylpentane, 3-ethyl-2-methylpentane, 3-ethyl-3-methylpentane, 2,2,3,3-tetramethylbutane C9: 35 isomers C10: 75 isomers C12: 355 isomers C32: 27,711,253,769 isomers C60: 22,158,734,535,770,411,074,184 isomers, many of which are not stable Branched alkanes can be chiral. For example, 3-methylhexane and its higher homologues are chiral due to their stereogenic center at carbon atom number 3. The above list only includes differences of connectivity, not stereochemistry. In addition to the alkane isomers, the chain of carbon atoms may form one or more rings. Such compounds are called cycloalkanes, and are also excluded from the above list because changing the number of rings changes the molecular formula. For example, cyclobutane and methylcyclopropane are isomers of each other (C4H8), but are not isomers of butane (C4H10). Branched alkanes are more thermodynamically stable than their linear (or less branched) isomers. For example, the highly branched 2,2,3,3-tetramethylbutane is about 1.9 kcal/mol more stable than its linear isomer, n-octane. Nomenclature The IUPAC nomenclature (systematic way of naming compounds) for alkanes is based on identifying hydrocarbon chains. Unbranched, saturated hydrocarbon chains are named systematically with a Greek numerical prefix denoting the number of carbons and the suffix "-ane". In 1866, August Wilhelm von Hofmann suggested systematizing nomenclature by using the whole sequence of vowels a, e, i, o and u to create suffixes -ane, -ene, -ine (or -yne), -one, -une, for the hydrocarbons CnH2n+2, CnH2n, CnH2n−2, CnH2n−4, CnH2n−6. In modern nomenclature, the first three specifically name hydrocarbons with single, double and triple bonds; while "-one" now represents a ketone. Linear alkanes Straight-chain alkanes are sometimes indicated by the prefix "n-" or "n-"(for "normal") where a non-linear isomer exists. Although this is not strictly necessary and is not part of the IUPAC naming system, the usage is still common in cases where one wishes to emphasize or distinguish between the straight-chain and branched-chain isomers, e.g., "n-butane" rather than simply "butane" to differentiate it from isobutane. Alternative names for this group used in the petroleum industry are linear paraffins or n-paraffins. The first eight members of the series (in terms of number of carbon atoms) are named as follows: methane CH4 – one carbon and 4 hydrogen ethane C2H6 – two carbon and 6 hydrogen propane C3H8 – three carbon and 8 hydrogen butane C4H10 – four carbon and 10 hydrogen pentane C5H12 – five carbon and 12 hydrogen hexane C6H14 – six carbon and 14 hydrogen heptane C7H16 – seven carbons and 16 hydrogen octane C8H18 – eight carbons and 18 hydrogen The first four names were derived from methanol, ether, propionic acid and butyric acid. 
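The general formula CnH2n+2 and the roughly 14.03 u step between successive members of the homologous series mentioned earlier can be checked with a minimal Python sketch; the atomic masses below are standard values and the helper names are illustrative.

# Illustrative sketch: molecular formulas and approximate molecular masses for the
# first few straight-chain alkanes (CnH2n+2), showing the ~14.03 u step per CH2 unit.

NAMES = ["methane", "ethane", "propane", "butane", "pentane", "hexane", "heptane", "octane"]

def alkane_formula(n):
    c_part = "C" if n == 1 else f"C{n}"
    return c_part + f"H{2 * n + 2}"

def alkane_mass(n, m_c=12.011, m_h=1.008):
    return n * m_c + (2 * n + 2) * m_h

for n, name in enumerate(NAMES, start=1):
    print(name, alkane_formula(n), round(alkane_mass(n), 2))
# methane CH4 16.04, ethane C2H6 30.07, ... each step adds about 14.03 u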
Alkanes with five or more carbon atoms are named by adding the suffix -ane to the appropriate numerical multiplier prefix with elision of any terminal vowel (-a or -o) from the basic numerical term. Hence, pentane, C5H12; hexane, C6H14; heptane, C7H16; octane, C8H18; etc. The numeral prefix is generally Greek; however, alkanes with a carbon atom count ending in nine, for example nonane, use the Latin prefix non-. Branched alkanes Simple branched alkanes often have a common name using a prefix to distinguish them from linear alkanes, for example n-pentane, isopentane, and neopentane. IUPAC naming conventions can be used to produce a systematic name. The key steps in the naming of more complicated branched alkanes are as follows: Identify the longest continuous chain of carbon atoms Name this longest root chain using standard naming rules Name each side chain by changing the suffix of the name of the alkane from "-ane" to "-yl" Number the longest continuous chain in order to give the lowest possible numbers for the side-chains Number and name the side chains before the name of the root chain If there are multiple side chains of the same type, use prefixes such as "di-" and "tri-" to indicate it as such, and number each one. Add side chain names in alphabetical (disregarding "di-" etc. prefixes) order in front of the name of the root chain Saturated cyclic hydrocarbons Though technically distinct from the alkanes, this class of hydrocarbons is referred to by some as the "cyclic alkanes." As their description implies, they contain one or more rings. Simple cycloalkanes have a prefix "cyclo-" to distinguish them from alkanes. Cycloalkanes are named as per their acyclic counterparts with respect to the number of carbon atoms in their backbones, e.g., cyclopentane (C5H10) is a cycloalkane with 5 carbon atoms just like pentane (C5H12), but they are joined up in a five-membered ring. In a similar manner, propane and cyclopropane, butane and cyclobutane, etc. Substituted cycloalkanes are named similarly to substituted alkanes – the cycloalkane ring is stated, and the substituents are according to their position on the ring, with the numbering decided by the Cahn–Ingold–Prelog priority rules. Trivial/common names The trivial (non-systematic) name for alkanes is 'paraffins'. Together, alkanes are known as the 'paraffin series'. Trivial names for compounds are usually historical artifacts. They were coined before the development of systematic names, and have been retained due to familiar usage in industry. Cycloalkanes are also called naphthenes. Branched-chain alkanes are called isoparaffins. "Paraffin" is a general term and often does not distinguish between pure compounds and mixtures of isomers, i.e., compounds of the same chemical formula, e.g., pentane and isopentane. In IUPAC The following trivial names are retained in the IUPAC system: isobutane for 2-methylpropane isopentane for 2-methylbutane neopentane for 2,2-dimethylpropane. Non-IUPAC Some non-IUPAC trivial names are occasionally used: cetane, for hexadecane cerane, for hexacosane Physical properties All alkanes are colorless. Alkanes with the lowest molecular weights are gases, those of intermediate molecular weight are liquids, and the heaviest are waxy solids. Table of alkanes Boiling point Alkanes experience intermolecular van der Waals forces. The cumulative effects of these intermolecular forces give rise to greater boiling points of alkanes. 
Two factors influence the strength of the van der Waals forces: the number of electrons surrounding the molecule, which increases with the alkane's molecular weight the surface area of the molecule Under standard conditions, from CH4 to C4H10 alkanes are gaseous; from C5H12 to C17H36 they are liquids; and after C18H38 they are solids. As the boiling point of alkanes is primarily determined by weight, it should not be a surprise that the boiling point has an almost linear relationship with the size (molecular weight) of the molecule. As a rule of thumb, the boiling point rises 20–30 °C for each carbon added to the chain; this rule applies to other homologous series. A straight-chain alkane will have a boiling point higher than a branched-chain alkane due to the greater surface area in contact, and thus greater van der Waals forces, between adjacent molecules. For example, compare isobutane (2-methylpropane) and n-butane (butane), which boil at −12 and 0 °C, and 2,2-dimethylbutane and 2,3-dimethylbutane which boil at 50 and 58 °C, respectively. On the other hand, cycloalkanes tend to have higher boiling points than their linear counterparts due to the locked conformations of the molecules, which give a plane of intermolecular contact. Melting points The melting points of the alkanes follow a similar trend to boiling points for the same reason as outlined above. That is, (all other things being equal) the larger the molecule the higher the melting point. However, alkanes' melting points follow a more complex pattern, due to variations in the properties of their solid crystals. One difference in crystal structure that even-numbered alkanes (from hexane onwards) tend to form denser-packed crystals compared to their odd-numbered neighbors. This causes them to have a greater enthalpy of fusion (amount of energy required to melt them), raising their melting point. A second difference in crystal structure is that even-numbered alkanes (from octane onwards) tend to form more rotationally-ordered crystals compared to their odd-numbered neighbors. This causes them to have a greater entropy of fusion (increase in disorder from the solid to the liquid state), lowering their melting point. While these effects operate in opposing directions, the first effect tends to be slightly stronger, leading even-numbered alkanes to have slightly higher melting points than the average of their odd-numbered neighbors. This trend does not apply to methane, which has an unusually high melting point, higher than both ethane and propane. This is because it has a very low entropy of fusion, attributable to its high molecular symmetry and the rotational disorder in solid methane near its melting point (Methane I). The melting points of branched-chain alkanes can be either higher or lower than those of the corresponding straight-chain alkanes, again depending on these two factors. More symmetric alkanes tend towards higher melting points, due to enthalpic effects when they form ordered crystals, and entropic effects when they form disordered crystals (e.g. neopentane). Conductivity and solubility Alkanes do not conduct electricity in any way, nor are they substantially polarized by an electric field. For this reason, they do not form hydrogen bonds and are insoluble in polar solvents such as water. Since the hydrogen bonds between individual water molecules are aligned away from an alkane molecule, the coexistence of an alkane and water leads to an increase in molecular order (a reduction in entropy). 
As there is no significant bonding between water molecules and alkane molecules, the second law of thermodynamics suggests that this reduction in entropy should be minimized by minimizing the contact between alkane and water: Alkanes are said to be hydrophobic as they are insoluble in water. Their solubility in nonpolar solvents is relatively high, a property that is called lipophilicity. Alkanes are, for example, miscible in all proportions among themselves. The density of the alkanes usually increases with the number of carbon atoms but remains less than that of water. Hence, alkanes form the upper layer in an alkane–water mixture. Molecular geometry The molecular structure of the alkanes directly affects their physical and chemical characteristics. It is derived from the electron configuration of carbon, which has four valence electrons. The carbon atoms in alkanes are described as sp3 hybrids; that is to say that, to a good approximation, the valence electrons are in orbitals directed towards the corners of a tetrahedron which are derived from the combination of the 2s orbital and the three 2p orbitals. Geometrically, the angle between the bonds is cos−1(−1/3) ≈ 109.47°. This is exact for the case of methane, while larger alkanes containing a combination of C–H and C–C bonds generally have bonds that are within several degrees of this idealized value. Bond lengths and bond angles An alkane has only C–H and C–C single bonds. The former result from the overlap of an sp3 orbital of carbon with the 1s orbital of a hydrogen; the latter by the overlap of two sp3 orbitals on adjacent carbon atoms. The bond lengths amount to 1.09 × 10−10 m for a C–H bond and 1.54 × 10−10 m for a C–C bond. The spatial arrangement of the bonds is similar to that of the four sp3 orbitals—they are tetrahedrally arranged, with an angle of 109.47° between them. Structural formulae that represent the bonds as being at right angles to one another, while both common and useful, do not accurately depict the geometry. Conformation The spatial arrangement of the C–C and C–H bonds, described by the torsion angles of the molecule, is known as its conformation. In ethane, the simplest case for studying the conformation of alkanes, there is nearly free rotation about a carbon–carbon single bond. Two limiting conformations are important: eclipsed conformation and staggered conformation. The staggered conformation is 12.6 kJ/mol (3.0 kcal/mol) lower in energy (more stable) than the eclipsed conformation (the least stable). In highly branched alkanes, the bond angle may differ from the optimal value (109.5°) to accommodate bulky groups. Such distortions introduce a tension in the molecule, known as steric hindrance or strain. Strain substantially increases reactivity. Spectroscopic properties Spectroscopic signatures for alkanes are obtainable by the major characterization techniques. Infrared spectroscopy The C–H stretching mode gives strong absorptions between 2850 and 2960 cm−1, while the weaker C–C stretching mode absorbs between 800 and 1300 cm−1. The carbon–hydrogen bending modes depend on the nature of the group: methyl groups show bands at 1450 cm−1 and 1375 cm−1, while methylene groups show bands at 1465 cm−1 and 1450 cm−1. Carbon chains with more than four carbon atoms show a weak absorption at around 725 cm−1. NMR spectroscopy The proton resonances of alkanes are usually found at δH = 0.5–1.5. 
The carbon-13 resonances depend on the number of hydrogen atoms attached to the carbon: δC = 8–30 (primary, methyl, –CH3), 15–55 (secondary, methylene, –CH2–), 20–60 (tertiary, methine, C–H) and quaternary carbon. The carbon-13 resonance of quaternary carbon atoms is characteristically weak, due to the lack of nuclear Overhauser effect and the long relaxation time, and can be missed in weak samples, or samples that have not been run for a sufficiently long time. Mass spectrometry Since alkanes have high ionization energies, their electron impact mass spectra show weak currents for their molecular ions. The fragmentation pattern can be difficult to interpret, but in the case of branched-chain alkanes, the carbon chain is preferentially cleaved at tertiary or quaternary carbons due to the relative stability of the resulting free radicals. The mass spectrum of straight-chain alkanes is illustrated by that for dodecane: the fragment resulting from the loss of a single methyl group (M − 15) is absent, fragments are more intense than the molecular ion, and they are spaced by intervals of 14 mass units, corresponding to loss of CH2 groups. Chemical properties Alkanes are only weakly reactive with most chemical compounds. Because of their strong C–H bonds (~100 kcal/mol) and C–C bonds (~90 kcal/mol), they react only with the strongest of electrophilic reagents. They are also relatively unreactive toward free radicals. This inertness is the source of the term paraffins (with the meaning here of "lacking affinity"). In crude oil the alkane molecules have remained chemically unchanged for millions of years. Acid-base behavior The acid dissociation constant (pKa) values of all alkanes are estimated to range from 50 to 70, depending on the extrapolation method, hence they are extremely weak acids that are practically inert to bases (see: carbon acids). They are also extremely weak bases, undergoing no observable protonation in pure sulfuric acid (H0 ~ −12), although superacids that are at least millions of times stronger have been known to protonate them to give hypercoordinate alkanium ions (see: methanium ion). Thus, a mixture of antimony pentafluoride (SbF5) and fluorosulfonic acid (HSO3F), called magic acid, can protonate alkanes. Reactions with oxygen (combustion reaction) All alkanes react with oxygen in a combustion reaction, although they become increasingly difficult to ignite as the number of carbon atoms increases. The general equation for complete combustion is: CnH2n+2 + (n + (n+1)/2) O2 → (n + 1) H2O + n CO2 or, equivalently, CnH2n+2 + ((3n+1)/2) O2 → (n + 1) H2O + n CO2. In the absence of sufficient oxygen, carbon monoxide or even soot can be formed, as shown below: CnH2n+2 + (n + 1/2) O2 → (n + 1) H2O + n CO and CnH2n+2 + ((n+1)/2) O2 → (n + 1) H2O + n C. For example, methane: 2 CH4 + 3 O2 → 4 H2O + 2 CO and CH4 + O2 → 2 H2O + C. See the alkane heat of formation table for detailed data. The standard enthalpy change of combustion, ΔcH⊖, for alkanes increases by about 650 kJ/mol per CH2 group. Branched-chain alkanes have lower values of ΔcH⊖ than straight-chain alkanes of the same number of carbon atoms, and so can be seen to be somewhat more stable. Biodegradation Some organisms are capable of metabolizing alkanes. The methane monooxygenases convert methane to methanol. For higher alkanes, cytochrome P450 enzymes convert alkanes to alcohols, which are then susceptible to degradation. Free radical reactions Free radicals, molecules with unpaired electrons, play a large role in most reactions of alkanes. 
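As a quick numerical check on the combustion stoichiometry and the ΔcH⊖ rule of thumb given above, the following minimal Python sketch balances the complete-combustion equation for a given chain length and estimates the heat of combustion by adding roughly 650 kJ/mol per CH2 group to a methane value of about 890 kJ/mol. The function names are illustrative, and both the methane anchor value and the per-CH2 increment are approximations used only for this rough estimate, not tabulated data.

```python
from fractions import Fraction

def complete_combustion(n: int):
    """Balanced coefficients for CnH2n+2 + x O2 -> n CO2 + (n+1) H2O."""
    o2 = Fraction(3 * n + 1, 2)  # oxygen atoms needed: 2n (for CO2) + (n+1) (for H2O)
    return {"O2": o2, "CO2": n, "H2O": n + 1}

def estimated_heat_of_combustion(n: int) -> float:
    """Rough standard enthalpy of combustion in kJ/mol.

    Assumes ~890 kJ/mol for methane plus ~650 kJ/mol per added CH2 group,
    as stated in the text above; real tabulated values differ slightly.
    """
    return 890.0 + 650.0 * (n - 1)

if __name__ == "__main__":
    for n, name in [(1, "methane"), (4, "butane"), (8, "octane")]:
        c = complete_combustion(n)
        print(f"C{n}H{2*n+2} + {c['O2']} O2 -> {c['CO2']} CO2 + {c['H2O']} H2O; "
              f"ΔcH ≈ {estimated_heat_of_combustion(n):.0f} kJ/mol ({name})")
```

For octane, for example, this crude estimate comes out within a few percent of commonly tabulated values, which is as much accuracy as the rule of thumb promises.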
Free radical halogenation reactions occur with halogens, leading to the production of haloalkanes. The hydrogen atoms of the alkane are progressively replaced by halogen atoms. The reaction of alkanes and fluorine is highly exothermic and can lead to an explosion. These reactions are an important industrial route to halogenated hydrocarbons. There are three steps: initiation, in which the halogen radicals form by homolysis (usually, energy in the form of heat or light is required); chain reaction or propagation, in which the halogen radical abstracts a hydrogen from the alkane to give an alkyl radical, which reacts further; and chain termination, in which the radicals recombine. Experiments have shown that all halogenation produces a mixture of all possible isomers, indicating that all hydrogen atoms are susceptible to reaction. The mixture produced, however, is not statistical: secondary and tertiary hydrogen atoms are preferentially replaced due to the greater stability of secondary and tertiary free radicals. An example can be seen in the monobromination of propane (a simple statistical estimate of such product ratios is sketched below). In the Reed reaction, sulfur dioxide and chlorine convert hydrocarbons to sulfonyl chlorides under the influence of light. Under some conditions, alkanes will undergo nitration. C-H activation Certain transition metal complexes promote non-radical reactions with alkanes, resulting in so-called C–H bond activation reactions. Cracking Cracking breaks larger molecules into smaller ones. This reaction requires heat and catalysts. The thermal cracking process follows a homolytic mechanism with formation of free radicals. The catalytic cracking process involves the presence of acid catalysts (usually solid acids such as silica-alumina and zeolites), which promote a heterolytic (asymmetric) breakage of bonds yielding pairs of ions of opposite charges, usually a carbocation and an anion. Carbon-localized free radicals and cations are both highly unstable and undergo processes of chain rearrangement, C–C scission in position beta (i.e., cracking) and intra- and intermolecular hydrogen transfer or hydride transfer. In both types of processes, the corresponding reactive intermediates (radicals, ions) are permanently regenerated, and thus they proceed by a self-propagating chain mechanism. The chain of reactions is eventually terminated by radical or ion recombination. Isomerization and reformation Dragan and his colleague were the first to report isomerization in alkanes. Isomerization and reformation are processes in which straight-chain alkanes are heated in the presence of a platinum catalyst. In isomerization, the alkanes become branched-chain isomers; in other words, the alkane does not lose any carbon or hydrogen atoms, keeping the same molecular weight. In reformation, the alkanes become cycloalkanes or aromatic hydrocarbons, giving off hydrogen as a by-product. Both of these processes raise the octane number of the substance. Butane is the most common alkane that is put under the process of isomerization, as it makes many branched alkanes with high octane numbers. Other reactions In steam reforming, alkanes react with steam in the presence of a nickel catalyst to give hydrogen and carbon monoxide. Occurrence Occurrence of alkanes in the Universe Alkanes form a small portion of the atmospheres of the outer gas planets such as Jupiter (0.1% methane, 2 ppm ethane), Saturn (0.2% methane, 5 ppm ethane), Uranus (1.99% methane, 2.5 ppm ethane) and Neptune (1.5% methane, 1.5 ppm ethane). 
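Returning to the free-radical halogenation discussed above, the product mixture can be estimated by weighting each type of hydrogen atom by its count and by a relative reactivity factor. The sketch below does this for the monohalogenation of propane; the relative rate values used here (roughly 1 : 3.8 for primary : secondary hydrogens in chlorination and about 1 : 97 in bromination) are typical textbook figures quoted only for illustration, and the function name is illustrative.

```python
def product_distribution(h_counts, rel_rates):
    """Estimate monohalogenation product fractions.

    h_counts:  number of equivalent hydrogens leading to each product
    rel_rates: relative reactivity of each hydrogen type (same order)
    """
    weights = [n * k for n, k in zip(h_counts, rel_rates)]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    # Propane: 6 primary H (give 1-halopropane), 2 secondary H (give 2-halopropane)
    for name, rates in [("chlorination", (1.0, 3.8)), ("bromination", (1.0, 97.0))]:
        one_halo, two_halo = product_distribution((6, 2), rates)
        print(f"{name}: ~{one_halo:.0%} 1-halopropane, ~{two_halo:.0%} 2-halopropane")
```

The purely statistical expectation from the 6 : 2 hydrogen count would be 75% : 25%, so the calculation illustrates how strongly the greater stability of the secondary radical skews the outcome, particularly for bromination.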
Titan (1.6% methane), a satellite of Saturn, was examined by the Huygens probe, which indicated that Titan's atmosphere periodically rains liquid methane onto the moon's surface. Also on Titan, the Cassini mission has imaged seasonal methane/ethane lakes near the polar regions of Titan. Methane and ethane have also been detected in the tail of the comet Hyakutake. Chemical analysis showed that the abundances of ethane and methane were roughly equal, which is thought to imply that its ices formed in interstellar space, away from the Sun, which would have evaporated these volatile molecules. Alkanes have also been detected in meteorites such as carbonaceous chondrites. Occurrence of alkanes on Earth Traces of methane gas (about 0.0002% or 1745 ppb) occur in the Earth's atmosphere, produced primarily by methanogenic microorganisms, such as Archaea in the gut of ruminants. The most important commercial sources for alkanes are natural gas and oil. Natural gas contains primarily methane and ethane, with some propane and butane: oil is a mixture of liquid alkanes and other hydrocarbons. These hydrocarbons were formed when marine animals and plants (zooplankton and phytoplankton) died and sank to the bottom of ancient seas and were covered with sediments in an anoxic environment and converted over many millions of years at high temperatures and high pressure to their current form. Natural gas resulted thereby for example from the following reaction: C6H12O6 → 3 CH4 + 3 CO2 These hydrocarbon deposits, collected in porous rocks trapped beneath impermeable cap rocks, comprise commercial oil fields. They have formed over millions of years and once exhausted cannot be readily replaced. The depletion of these hydrocarbons reserves is the basis for what is known as the energy crisis. Alkanes have a low solubility in water, so the content in the oceans is negligible; however, at high pressures and low temperatures (such as at the bottom of the oceans), methane can co-crystallize with water to form a solid methane clathrate (methane hydrate). Although this cannot be commercially exploited at the present time, the amount of combustible energy of the known methane clathrate fields exceeds the energy content of all the natural gas and oil deposits put together. Methane extracted from methane clathrate is, therefore, a candidate for future fuels. Biological occurrence Aside from petroleum and natural gas, alkanes occur significantly in nature only as methane, which is produced by some archaea by the process of methanogenesis. These organisms are found in the gut of termites and cows. The methane is produced from carbon dioxide or other organic compounds. Energy is released by the oxidation of hydrogen: CO2 + 4 H2 → CH4 + 2 H2O It is probable that our current deposits of natural gas were formed in a similar way. Certain types of bacteria can metabolize alkanes: they prefer even-numbered carbon chains as they are easier to degrade than odd-numbered chains. Alkanes play a negligible role in higher organisms, with rare exception. Some yeasts, e.g., Candida tropicale, Pichia sp., Rhodotorula sp., can use alkanes as a source of carbon or energy. The fungus Amorphotheca resinae prefers the longer-chain alkanes in aviation fuel, and can cause serious problems for aircraft in tropical regions. In plants, the solid long-chain alkanes are found in the plant cuticle and epicuticular wax of many species, but are only rarely major constituents. 
They protect the plant against water loss, prevent the leaching of important minerals by the rain, and protect against bacteria, fungi, and harmful insects. The carbon chains in plant alkanes are usually odd-numbered, between 27 and 33 carbon atoms in length, and are made by the plants by decarboxylation of even-numbered fatty acids. The exact composition of the layer of wax is not only species-dependent but also changes with the season and such environmental factors as lighting conditions, temperature or humidity. The Jeffrey pine is noted for producing exceptionally high levels of n-heptane in its resin, for which reason its distillate was designated as the zero point for one octane rating. Floral scents have also long been known to contain volatile alkane components, and n-nonane is a significant component in the scent of some roses. Emission of gaseous and volatile alkanes such as ethane, pentane, and hexane by plants has also been documented at low levels, though they are not generally considered to be a major component of biogenic air pollution. Edible vegetable oils also typically contain small fractions of biogenic alkanes with a wide spectrum of carbon numbers, mainly 8 to 35, usually peaking in the low to upper 20s, with concentrations up to dozens of milligrams per kilogram (parts per million by weight) and sometimes over a hundred for the total alkane fraction. Alkanes are found in animal products, although they are less important than unsaturated hydrocarbons. One example is the shark liver oil, which is approximately 14% pristane (2,6,10,14-tetramethylpentadecane, C19H40). They are important as pheromones, chemical messenger materials, on which insects depend for communication. In some species, e.g. the support beetle Xylotrechus colonus, pentacosane (C25H52), 3-methylpentaicosane (C26H54) and 9-methylpentaicosane (C26H54) are transferred by body contact. With others like the tsetse fly Glossina morsitans morsitans, the pheromone contains the four alkanes 2-methylheptadecane (C18H38), 17,21-dimethylheptatriacontane (C39H80), 15,19-dimethylheptatriacontane (C39H80) and 15,19,23-trimethylheptatriacontane (C40H82), and acts by smell over longer distances. Waggle-dancing honey bees produce and release two alkanes, tricosane and pentacosane. Ecological relations One example, in which both plant and animal alkanes play a role, is the ecological relationship between the sand bee (Andrena nigroaenea) and the early spider orchid (Ophrys sphegodes); the latter is dependent for pollination on the former. Sand bees use pheromones in order to identify a mate; in the case of A. nigroaenea, the females emit a mixture of tricosane (C23H48), pentacosane (C25H52) and heptacosane (C27H56) in the ratio 3:3:1, and males are attracted by specifically this odor. The orchid takes advantage of this mating arrangement to get the male bee to collect and disseminate its pollen; parts of its flower not only resemble the appearance of sand bees but also produce large quantities of the three alkanes in the same ratio as female sand bees. As a result, numerous males are lured to the blooms and attempt to copulate with their imaginary partner: although this endeavor is not crowned with success for the bee, it allows the orchid to transfer its pollen, which will be dispersed after the departure of the frustrated male to other blooms. Production Petroleum refining The most important source of alkanes is natural gas and crude oil. Alkanes are separated in an oil refinery by fractional distillation. 
Unsaturated hydrocarbons are converted to alkanes by hydrogenation: RCH=CH2 + H2 → RCH2CH3 (R = alkyl). Another route to alkanes is hydrogenolysis, which entails cleavage of C–heteroatom bonds using hydrogen. In industry, the main substrates are organonitrogen and organosulfur impurities, i.e. the heteroatoms are N and S. The specific processes are called hydrodenitrification and hydrodesulfurization. Hydrogenolysis can be applied to the conversion of virtually any functional group into hydrocarbons. Substrates include haloalkanes, alcohols, aldehydes, ketones, carboxylic acids, etc. Both hydrogenolysis and hydrogenation are practiced in refineries. These conversions can also be effected by using lithium aluminium hydride, Clemmensen reduction and other specialized routes. Coal Coal is a more traditional precursor to alkanes. A wide range of technologies have been intensively practiced for centuries. Simply heating coal gives alkanes, leaving behind coke. Relevant technologies include the Bergius process and coal liquefaction. Partial combustion of coal and related solid organic compounds generates carbon monoxide, which can be hydrogenated using the Fischer–Tropsch process. This technology allows the synthesis of liquid hydrocarbons, including alkanes. This method is used to produce substitutes for petroleum distillates. Laboratory preparation Rarely is there any interest in the synthesis of alkanes, since they are usually commercially available and less valued than virtually any precursor. The best-known method is hydrogenation of alkenes. Many C–X bonds can be converted to C–H bonds using lithium aluminium hydride, Clemmensen reduction, and other specialized routes. Hydrolysis of alkyl Grignard reagents and alkyllithium compounds gives alkanes. Applications Fuels The dominant use of alkanes is as fuels. Propane and butane, easily liquefied gases, are commonly known as liquefied petroleum gas (LPG). From pentane to octane the alkanes are highly volatile liquids. They are used as fuels in internal combustion engines, as they vaporize easily on entry into the combustion chamber without forming droplets, which would impair the uniformity of the combustion. Branched-chain alkanes are preferred as they are much less prone to premature ignition, which causes knocking, than their straight-chain homologues. This propensity to premature ignition is measured by the octane rating of the fuel, where 2,2,4-trimethylpentane (isooctane) has an arbitrary value of 100, and heptane has a value of zero. Apart from their use as fuels, the middle alkanes are also good solvents for nonpolar substances. Alkanes from nonane to, for instance, hexadecane (an alkane with sixteen carbon atoms) are liquids of higher viscosity, less and less suitable for use in gasoline. They form instead the major part of diesel and aviation fuel. Diesel fuels are characterized by their cetane number, cetane being an old name for hexadecane. However, the higher melting points of these alkanes can cause problems at low temperatures and in polar regions, where the fuel becomes too thick to flow correctly. Precursors to chemicals By the process of cracking, alkanes can be converted to alkenes. Simple alkenes are precursors to polymers, such as polyethylene and polypropylene. When the cracking is taken to extremes, alkanes can be converted to carbon black, which is a significant tire component. Chlorination of methane gives chloromethanes, which are used as solvents and building blocks for complex compounds. Similarly, treatment of methane with sulfur gives carbon disulfide. 
Still other chemicals are prepared by reaction with sulfur trioxide and nitric acid. Other Some light hydrocarbons are used as aerosol sprays. Alkanes from hexadecane upwards form the most important components of fuel oil and lubricating oil. In the latter function, they work at the same time as anti-corrosive agents, as their hydrophobic nature means that water cannot reach the metal surface. Many solid alkanes find use as paraffin wax, for example, in candles. This should not be confused, however, with true wax, which consists primarily of esters. Alkanes with a chain length of approximately 35 or more carbon atoms are found in bitumen, used, for example, in road surfacing. However, the higher alkanes have little value and are usually split into lower alkanes by cracking. Hazards Alkanes are highly flammable, but they have low toxicities. Methane "is toxicologically virtually inert." Alkanes can act as asphyxiants and narcotics. See also Alkene Alkyne Cycloalkane Higher alkanes Aliphatic compound Notes References Further reading Virtual Textbook of Organic Chemistry Visualizations of the low-temperature crystal structures of alkanes (methane to nonane) Hydrocarbons
Alkane
[ "Chemistry" ]
8,738
[ "Organic compounds", "Hydrocarbons", "Alkanes" ]
657
https://en.wikipedia.org/wiki/Bitumen
Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In American English, the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century, the term asphaltum was in general use. The word derives from the Ancient Greek ásphaltos, which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world is the Pitch Lake of southwest Trinidad, which is estimated to contain 10 million tons. About 70% of annual bitumen production is destined for road construction, its primary use. In this application, bitumen is used to bind aggregate particles like gravel and forms a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant. In material sciences and engineering, the terms asphalt and bitumen are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term bitumen for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, bitumen is the prevalent term in much of the world; however, in American English, asphalt is more commonly used. To help avoid confusion, the terms "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. to distinguish it from asphalt concrete. Colloquially, various forms of bitumen are sometimes referred to as "tar", as in the name of the La Brea Tar Pits. Naturally occurring bitumen is sometimes specified by the term crude bitumen. Its viscosity is similar to that of cold molasses, while the material obtained from the fractional distillation of crude oil is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural bitumen in the Athabasca oil sands, which cover an area larger than England. Terminology Etymology The Latin word bitumen traces to the Proto-Indo-European root *gʷet- "pitch". The expression "bitumen" originated in the Sanskrit, where we find the words "jatu", meaning "pitch", and "jatu-krit", meaning "pitch creating", "pitch producing" (referring to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally "gwitu-men" (pertaining to pitch), and by others, "pixtumens" (exuding or bubbling pitch), which was subsequently shortened to "bitumen", thence passing via French into English. From the same root is derived the Anglo-Saxon word "cwidu" (Mastix), the German word "Kitt" (cement or mastic) and the Old Norse word "kvada". The word "asphalt" is claimed to have been derived from the Accadian term "asphaltu" or "sphallo", meaning "to split". It was later adopted by the Homeric Greeks in the form of the adjective ἀσφαλής, ές, signifying "firm", "stable", "secure", and the corresponding verb ἀσφαλίζω, meaning "to make firm or stable", "to secure". 
The word "asphalt" is derived from the late Middle English, in turn from French asphalte, based on Late Latin asphalton, asphaltum, which is the latinisation of the Greek (ásphaltos, ásphalton), a word meaning "asphalt/bitumen/pitch", which perhaps derives from , "not, without", i.e. the alpha privative, and (sphallein), "to cause to fall, baffle, (in passive) err, (in passive) be balked of". The first use of asphalt by the ancients was as a cement to secure or join various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall. From the Greek, the word passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). In French, the term asphalte is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads. Modern terminology Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today. In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac"). In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit". "Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material. Neither of the terms "asphalt" or "bitumen" should be confused with tar or coal tars. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons primarily sourced from vegetation masses, whether fossilized as with coal, or freshly harvested. The majority of bitumen, on the other hand, was formed naturally when vast quantities of organic animal materials were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined in long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal using the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around. 
Composition Normal composition The components of bitumen include four main classes of compounds: Naphthene aromatics (naphthalene), consisting of partially hydrogenated polycyclic aromatic compounds Polar aromatics, consisting of high molecular weight phenols and carboxylic acids produced by partial oxidation of the material Saturated hydrocarbons; the percentage of saturated compounds in asphalt correlates with its softening point Asphaltenes, consisting of high molecular weight phenols and heterocyclic compounds Bitumen typically contains, elementally 80% by weight of carbon; 10% hydrogen; up to 6% sulfur; and molecularly, between 5 and 25% by weight of asphaltenes dispersed in 90% to 65% maltenes. Most natural bitumens also contain organosulfur compounds, nickel and vanadium are found at <10 parts per million, as is typical of some petroleum. The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of bitumen, because the number of molecules with different chemical structure is extremely large". Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion include La Brea Tar Pits and the Canadian tar sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes informally used at times to refer to asphalt, as in Pitch Lake. Additives, mixtures and contaminants For economic and other reasons, bitumen is sometimes sold combined with other materials, often without being labeled as anything other than simply "bitumen". Of particular note is the use of re-refined engine oil bottoms – "REOB" or "REOBs"the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers, in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of bitumen and poorer-performing pavement. Occurrence The majority of bitumen used commercially is obtained from petroleum. Nonetheless, large amounts of bitumen occur in concentrated form in nature. Naturally occurring deposits of bitumen are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These natural deposits of bitumen have been formed during the Carboniferous period, when giant swamp forests dominated many parts of the Earth. They were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50°C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum. 
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and the McKittrick Tar Pits in California, as well as in the Dead Sea. Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US. The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering an area larger than England or New York state. These bituminous sands contain commercially established oil reserves large enough to give Canada the third largest oil reserves in the world. Although historically it was used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States. The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage. Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen. Bitumen may occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis. Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat. Due to pressure from the rising of the Rocky Mountains in southwestern Alberta, 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped into underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands. History Paleolithic times Bitumen use goes back to the Middle Paleolithic, where it was shaped into tool handles or used as an adhesive for attaching stone tools to hafts. The earliest evidence of bitumen use was discovered when archeologists identified bitumen material on Levallois flint artefacts that date to about 71,000 years BP at the Umm el Tlel open-air site, located on the northern slope of the Qdeir Plateau in el Kowm Basin in Central Syria. Microscopic analyses found bituminous residue on two-thirds of the stone artefacts, suggesting that bitumen was an important and frequently-used component of tool making for people in that region at that time. 
Geochemical analyses of the asphaltic residues places its source to localized natural bitumen outcroppings in the Bichri Massif, about 40 km northeast of the Umm el Tlel archeological site. A re-examination of artifacts uncovered in 1908 at Le Moustier rock shelters in France has identified Mousterian stone tools that were attached to grips made of ochre and bitumen. The grips were formulated with 55% ground goethite ochre and 45% cooked liquid bitumen to create a moldable putty that hardened into handles. Earlier, less-careful excavations at Le Moustier prevent conclusive identification of the archaeological culture and age, but the European Mousterian style of these tools suggests they are associated with Neanderthals during the late Middle Paleolithic into the early Upper Paleolithic between 60,000 and 35,000 years before present. It is the earliest evidence of multicomponent adhesive in Europe. Ancient times The use of natural bitumen for waterproofing and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered in Mehrgarh, of the Indus Valley civilization, lined with it. By the 3rd millennium BC refined rock asphalt was in use in the region, and was used to waterproof the Great Bath in Mohenjo-daro. In the ancient Near East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon. The long Euphrates Tunnel beneath the river Euphrates at Babylon in the time of Queen Semiramis () was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent. Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is moom, which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as Palus Asphaltites (Asphalt Lake). In approximately 40 AD, Dioscorides described the Dead Sea material as Judaicum bitumen, and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC. In the ancient Far East, natural bitumen was slowly boiled to get rid of the higher fractions, leaving a thermoplastic material of higher molecular weight that, when layered on objects, became hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China. In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer. Bitumen was also used to waterproof plank canoes used by indigenous peoples in pre-colonial southern California. Continental Europe In 1553, Pierre Belon described in his work Observations that pissasphalto, a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for tarring of ships. 
An 1838 edition of Mechanics Magazine cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation". But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use of it had been made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835. United Kingdom Among the earlier uses of bitumen in the United Kingdom was for etching. William Salmon's Polygraphice (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources. The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)". Claridge obtained a patent in Scotland on 27 March 1838, and obtained a patent in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. Claridge's Patent Asphalte Companyformed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France","laid one of the first asphalt pavements in Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". 
The Bonnington Chemical Works manufactured asphalt using coal tar and by 1839 had installed it in Bonnington. In 1838, there was a flurry of entrepreneurial activity involving bitumen, which had uses beyond paving. For example, bitumen could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. One of the earliest surviving examples of its use can be seen at Highgate Cemetery where it was used in 1839 to seal the roof of the terrace catacombs. On the London stockmarket, there were various claims as to the exclusivity of bitumen quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s". In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company. Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm. United States The first use of bitumen in the New World was by aboriginal peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive. It is found on many different artifacts of tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It was also used in decorations. Small round shell beads were often set in asphaltum to provide decorations. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was used also to seal the planks on ocean-going canoes. Asphalt was first used to pave streets in the 1870s. At first naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial. In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. 
Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889. In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. The streets became faster and more dangerous so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways. Canada Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it wasn't until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance." The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot water oil separation process and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which between 1925 and 1958 produced up to per day of bitumen using Dr. Clark's method. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site. Photography and art Bitumen was used in early photographic technology. In 1826, or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. 
The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes. Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle. Modern use Global use The vast majority of refined bitumen is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, bitumen is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of bitumen is approximately 102 million tonnes per year. Approximately 85% of all the bitumen produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the bitumen to modify its properties according to the application for which the bitumen is ultimately intended. A further 10% of global bitumen production is used in roofing applications, where its waterproofing qualities are invaluable. The remaining 5% of bitumen is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Bitumen is applied in the construction and maintenance of many structures, systems, and components, such as: highways, airport runways, footways and pedestrian ways, car parks, racetracks, tennis courts, roofing, damp proofing, dams, reservoir and pool linings, soundproofing, pipe coatings, cable coatings, paints, building water proofing, tile underlying waterproofing, and newspaper ink production. Rolled asphalt concrete The largest use of bitumen is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the bitumen consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe. Asphalt concrete pavement mixes are typically composed of 5% bitumen (known as asphalt cement in the US) and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, bitumen must be heated so it can be mixed with the aggregates at the asphalt mixing facility. 
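As a rough illustration of the quantities implied by the mix proportions just described, the sketch below estimates the tonnage of asphalt concrete and of bitumen binder needed to overlay one lane-kilometre of road. The lane width, layer thickness, and compacted mix density used here are illustrative assumptions rather than standard values, and the function name is hypothetical.

```python
def paving_material_estimate(length_m, width_m, thickness_m,
                             mix_density_t_per_m3=2.4, binder_fraction=0.05):
    """Rough mass of asphalt mix and of bitumen binder for a paving job.

    Assumes a compacted mix density of ~2.4 t/m3 and ~5% binder by mass,
    in line with the typical mix composition described above.
    """
    volume_m3 = length_m * width_m * thickness_m
    mix_t = volume_m3 * mix_density_t_per_m3
    return mix_t, mix_t * binder_fraction

if __name__ == "__main__":
    # One lane-kilometre: 1000 m long, 3.5 m wide, 50 mm surface course (assumed)
    mix_t, binder_t = paving_material_estimate(1000, 3.5, 0.05)
    print(f"~{mix_t:.0f} t of asphalt concrete, of which ~{binder_t:.0f} t is bitumen")
```

For comparison, the figure of about 112 pounds per square yard per inch of thickness quoted in the following paragraph corresponds to roughly 2.4 t/m3, so the assumed density is consistent with it.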
The temperature required varies depending upon characteristics of the bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required. The weight of an asphalt pavement depends upon the aggregate type, the bitumen, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness. When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The bitumen in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the bitumen removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use. Asphalt concrete paving is widely used in airports around the world. Due to its sturdiness and ability to be repaired quickly, it is widely used for runways. Mastic asphalt Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher bitumen (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated and spread in layers to form an impervious barrier. Bitumen emulsion Bitumen emulsions are colloidal mixtures of bitumen and water. Due to the different surface tensions of the two liquids, stable emulsions cannot be created simply by mixing. Therefore, various emulsifiers and stabilizers are added. Emulsifiers are amphiphilic molecules that differ in the charge of their polar head group. They reduce the surface tension of the emulsion and thus prevent bitumen particles from fusing. The emulsifier charge defines the type of emulsion: anionic (negatively charged) and cationic (positively charged). The concentration of an emulsifier is a critical parameter affecting the size of the bitumen particles—higher concentrations lead to smaller bitumen particles. Thus, emulsifiers have a great impact on the stability, viscosity, breaking strength, and adhesion of the bitumen emulsion. The size of bitumen particles is usually between 0.1 and 50 μm, with a main fraction between 1 μm and 10 μm. Laser diffraction techniques can be used to determine the particle size distribution quickly and easily. Cationic emulsifiers primarily include long-chain amines such as imidazolines, amido-amines, and diamines, which acquire a positive charge when an acid is added. Anionic emulsifiers are often fatty acids extracted from lignin, tall oil, or tree resin, saponified with bases such as NaOH, which creates a negative charge. During the storage of bitumen emulsions, bitumen particles sediment, agglomerate (flocculation), or fuse (coagulation), which leads to a certain instability of the bitumen emulsion. How fast this process occurs depends on the formulation of the bitumen emulsion but also on storage conditions such as temperature and humidity. 
When emulsified bitumen comes into contact with aggregates, the emulsifiers lose their effectiveness, the emulsion breaks down, and an adhering bitumen film is formed; this is referred to as "breaking". The bitumen particles almost instantly create a continuous bitumen film by coagulating and separating from the water, which evaporates. Not every asphalt emulsion breaks at the same rate on contact with aggregates. This enables a classification into rapid-setting (R), slow-setting (SS), and medium-setting (MS) emulsions, but also an individual, application-specific optimization of the formulation and a wide field of application. For example, slow-breaking emulsions ensure a longer processing time, which is particularly advantageous for fine aggregates. Adhesion problems are reported for anionic emulsions in contact with quartz-rich aggregates; these are therefore often replaced by cationic emulsions, which achieve better adhesion. The extensive range of bitumen emulsions is covered insufficiently by standardization. DIN EN 13808 for cationic asphalt emulsions has existed since July 2005. Here, a classification of bitumen emulsions based on letters and numbers is described, considering charges, viscosities, and the type of bitumen. The production process of bitumen emulsions is very complex. Two methods are commonly used, the "colloid mill" method and the "high internal phase ratio" (HIPR) method. In the colloid mill method, a rotor moves at high speed within a stator while bitumen and a water–emulsifier mixture are added. The resulting shear forces generate bitumen particles between 5 μm and 10 μm coated with emulsifiers. The HIPR method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the colloid mill method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations. Bitumen emulsions are used in a wide variety of applications. They are used in road construction and building protection, primarily in cold recycling mixtures, adhesive coatings, and surface treatments. Due to the lower viscosity in comparison to hot bitumen, processing requires less energy and is associated with significantly less risk of fire and burns. Chipseal involves spraying the road surface with bitumen emulsion followed by a layer of crushed rock, gravel or crushed slag. Slurry seal is a mixture of bitumen emulsion and fine crushed aggregate that is spread on the surface of a road. 
Cold-mixed asphalt can also be made from bitumen emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and bitumen emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements. Bitumen emulsion based techniques are known to be useful for all classes of roads; their use may also be possible in the following applications: 1. asphalts for heavily trafficked roads (based on the use of polymer modified emulsions); 2. warm emulsion based mixtures, to improve both their maturation time and mechanical properties; 3. half-warm technology, in which aggregates are heated up to about 100 °C, producing mixtures with similar properties to those of hot asphalts; 4. high performance surface dressing. Synthetic crude oil Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100-ton capacity) power shovels and loaded into even larger (400-ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015 Canadian bitumen upgraders were producing large volumes of synthetic crude oil, of which 75% was exported to oil refineries in the United States. In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: the Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants. Non-upgraded crude bitumen Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015 Canadian production and exports of non-upgraded bitumen exceeded those of synthetic crude oil, with about 65% exported to the United States. 
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Maya or Dubai crude. Radioactive waste encapsulation matrix Bitumen was used starting in the 1960s as a hydrophobic matrix aiming to encapsulate radioactive waste such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants has been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because of operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to its geological disposal in deep rock formations. One of the main problems is the swelling of bitumen exposed to radiation and to water. Bitumen swelling is first induced by radiation because of the presence of hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is swelling of the matrix when the encapsulated hygroscopic salts, exposed to water or moisture, start to rehydrate and dissolve. The high concentration of salt in the pore solution is then responsible for osmotic effects inside the bituminised matrix. The water moves in the direction of the concentrated salts, the bitumen acting as a semi-permeable membrane. This also causes the matrix to swell. The swelling pressure due to the osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. When the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached on contact with groundwater and released into the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks. The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. Under their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are also often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging. Different types of bitumen have been used: blown bitumen (partly oxidized with atmospheric oxygen at high temperature after distillation, and harder) and direct distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct distillation bitumen, with a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons. 
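Returning to the swelling pressure quoted above, a rough van 't Hoff estimate shows how osmotic pressures of that order can arise; the pore-solution concentration used here is an assumed, illustrative value for a concentrated nitrate brine, not a figure from the text:

\[
\Pi \approx cRT \approx \left(8\times10^{3}\ \tfrac{\text{mol}}{\text{m}^{3}}\right)\left(8.314\ \tfrac{\text{J}}{\text{mol K}}\right)\left(298\ \text{K}\right) \approx 2\times10^{7}\ \text{Pa} \approx 200\ \text{bar},
\]

where \(c\) is the total concentration of dissolved species, \(R\) the gas constant and \(T\) the temperature. The point of the estimate is only that a concentrated salt solution behind a semi-permeable bitumen film can plausibly generate pressures of the magnitude cited above.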
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations. Other uses Roofing shingles and roll roofing account for most of the remaining bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel, and it is also used in paint and marker inks by some exterior paint supply companies to increase the weather resistance and permanence of the paint or ink, and to make the color darker. Bitumen is also used to seal some alkaline batteries during the manufacturing process. Bitumen is also commonly used as a ground in the etching process of intaglio printmaking. Production About 164,000,000 tons were produced in 2019. It is obtained as the "heavy" (i.e., difficult to distill) fraction. Material with a boiling point greater than around 500°C is considered asphalt. Vacuum distillation separates it from the other components in crude oil (such as naphtha, gasoline and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust the properties of the material to suit applications. In a de-asphalting unit, the crude bitumen is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product: namely reacting it with oxygen. This step makes the product harder and more viscous. Bitumen is typically stored and transported at temperatures around . Sometimes diesel oil or kerosene are mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a releasing agent before filling to aid release. Diesel oil is no longer used as a release agent due to environmental concerns. Oil sands Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feed stock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have resulted in deeper deposits becoming producible by in situ methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but as a result of the decline after 2014 it became uneconomic to build new plants again. By 2014, Canadian crude bitumen production averaged about per day and was projected to rise to per day by 2020. The total amount of crude bitumen in Alberta that could be extracted is estimated to be about , which at a rate of would last about 200 years. Alternatives and bioasphalt Although uncompetitive economically, bitumen can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. 
Bitumen can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently. Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use bitumen alternatives are called green parking lots. Albanian deposits Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, in Albania, the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near zero and a softening point (ring and ball) around 120°C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%. Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen. In 1875, the exploitation rights were granted to the Ottoman government and in 1912, they were transferred to the Italian company Simsa. From 1945 the mine was exploited by the Albanian government, and since 2001 it has been managed by a French company, which organized the mining process for manufacturing natural bitumen on an industrial scale. Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine. Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed into the mixer or into the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermal fusible polyethylene bags. A life-cycle assessment study of natural Selenizza compared with petroleum bitumen has shown that the environmental impact of Selenizza is about half the impact of the road asphalt produced in oil refineries in terms of carbon dioxide emission. Recycling Bitumen is a commonly recycled material in the construction industry. The two most common recycled materials that contain bitumen are reclaimed asphalt pavement (RAP) and reclaimed asphalt shingles (RAS). RAP is recycled at a greater rate than any other material in the United States, and typically contains approximately 5–6% bitumen binder. Asphalt shingles typically contain 20–40% bitumen binder. Bitumen naturally becomes stiffer over time due to oxidation, evaporation, exudation, and physical hardening; a rough sketch of how reclaimed and virgin binder combine in a recycled mix is given below. 
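As a purely illustrative sketch of the blending arithmetic behind those binder contents (the RAP fraction, binder percentages, and target content below are assumptions chosen to sit inside the typical ranges quoted above, not values from any specification or mix-design method):

import math  # not strictly needed, but kept for clarity if extended

def virgin_binder_needed(rap_fraction, rap_binder_pct, target_binder_pct):
    """Estimate the virgin binder (as % of total mix mass) needed so that the
    finished mix reaches the target total binder content.

    rap_fraction      -- mass fraction of the mix that is RAP (e.g. 0.30)
    rap_binder_pct    -- binder content of the RAP, percent by mass (e.g. 5.5)
    target_binder_pct -- desired total binder content of the mix, percent
    """
    binder_from_rap = rap_fraction * rap_binder_pct      # % of total mix mass
    return max(target_binder_pct - binder_from_rap, 0.0)

# Example: a mix containing 30% RAP (at ~5.5% binder) aiming for ~5% total binder.
extra = virgin_binder_needed(rap_fraction=0.30, rap_binder_pct=5.5, target_binder_pct=5.0)
print(f"Virgin binder to add: about {extra:.2f}% of the mix mass")
# -> about 3.35%, i.e. roughly a third of the binder demand is met by the RAP.

The sketch ignores the fact that the aged binder is stiffer and only partly effective, which is exactly the problem the additives described next are meant to address.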
For this reason, recycled asphalt is typically combined with virgin asphalt, softening agents, and/or rejuvenating additives to restore its physical and chemical properties. Economics Although bitumen typically makes up only 4 to 5 percent (by weight) of the pavement mixture, as the pavement's binder, it is also the most expensive part of the cost of the road-paving material. During bitumen's early use in modern paving, oil refiners gave it away. However, bitumen is a highly traded commodity today. Its prices increased substantially in the early 21st Century. A U.S. government report states: "In 2002, asphalt sold for approximately $160 per ton. By the end of 2006, the cost had doubled to approximately $320 per ton, and then it almost doubled again in 2012 to approximately $610 per ton." The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years." The Middle East is a significant exporter of bitumen, particularly to India and China. According to the Argus Bitumen Report (2024/07/12), India is the largest importer, driven by extensive infrastructure projects. The report projects a CAGR of 4.5% for India's bitumen imports over the next five years, while China's imports are expected to grow at a CAGR of 3.8%. The current export price to India is approximately $350 per metric ton, and for China, it is around $360 per metric ton. The Middle East's strategic advantage in crude oil production underpins its capacity to meet these demands. Health and safety People can be exposed to bitumen in the workplace by breathing in fumes or skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5mg/m3 over a 15-minute period. Bitumen is a largely inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with bitumen, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard/risk of the bitumen emissions. In particular, temperatures greater than 199°C (390°F), were shown to produce a greater exposure risk than when bitumen was heated to lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes as a Class 2B possible carcinogen, indicating inadequate evidence of carcinogenicity in humans. In 2020, scientists reported that bitumen currently is a significant and largely overlooked source of air pollution in urban areas, especially during hot and sunny periods. A bitumen-like substance found in the Himalayas and known as shilajit is sometimes used as an Ayurveda medicine, but is not in fact a tar, resin or bitumen. See also Asphalt plant Asphaltene Bioasphalt Bitumen-based fuel Bituminous coal Bituminous rocks Blacktop Cariphalte Duxit Macadam Oil sands Pitch drop experiment Pitch (resin) Road surface Tar Tarmac Sealcoat Stamped asphalt Notes References Sources . External links Pavement Interactive – Asphalt CSU Sacramento, The World Famous Asphalt Museum! 
National Institute for Occupational Safety and Health – Asphalt Fumes Scientific American, "Asphalt", 20 August 1881, p. 121 Amorphous solids Building materials Chemical mixtures IARC Group 2B carcinogens Pavements Petroleum products Road construction materials
Bitumen
[ "Physics", "Chemistry", "Engineering" ]
11,862
[ "Petroleum products", "Building engineering", "Unsolved problems in physics", "Architecture", "Construction", "Petroleum", "Materials", "Chemical mixtures", "nan", "Asphalt", "Amorphous solids", "Matter", "Building materials" ]
673
https://en.wikipedia.org/wiki/Atomic%20number
The atomic number or nuclear charge number (symbol Z) of a chemical element is the charge number of its atomic nucleus. For ordinary nuclei composed of protons and neutrons, this is equal to the proton number (np) or the number of protons found in the nucleus of every atom of that element. The atomic number can be used to uniquely identify ordinary chemical elements. In an ordinary uncharged atom, the atomic number is also equal to the number of electrons. For an ordinary atom which contains protons, neutrons and electrons, the sum of the atomic number Z and the neutron number N gives the atom's atomic mass number A. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of the nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in daltons (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A. Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century. The conventional symbol Z comes from the German word Zahl, 'number', which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word Atomzahl (and its English equivalent atomic number) come into common use in this context. The rules above do not always apply to exotic atoms which contain short-lived elementary particles other than protons, neutrons and electrons. History In the 19th century, the term "atomic number" typically meant the number of atoms in a given volume. Modern chemists prefer to use the concept of molar concentration. In 1913, Antonius van den Broek proposed that the electric charge of an atomic nucleus, expressed as a multiplier of the elementary charge, was equal to the element's sequential position on the periodic table. Ernest Rutherford, in various articles in which he discussed van den Broek's idea, used the term "atomic number" to refer to an element's position on the periodic table. No writer before Rutherford is known to have used the term "atomic number" in this way, so it was probably he who established this definition. After Rutherford deduced the existence of the proton in 1920, "atomic number" customarily referred to the proton number of an atom. In 1921, the German Atomic Weight Commission based its new periodic table on the nuclear charge number and in 1923 the International Committee on Chemical Elements followed suit. The periodic table and a natural number for each element The periodic table of elements creates an ordering of the elements, and so they can be numbered in order. 
Dmitri Mendeleev arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht"). However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9). This placement is consistent with the modern practice of ordering the elements by proton number, Z, but that number was not known or suspected at the time. A simple numbering based on atomic weight position was never entirely satisfactory. In addition to the case of iodine and tellurium, several other pairs of elements (such as argon and potassium, cobalt and nickel) were later shown to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However, the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time). The Rutherford-Bohr model and van den Broek In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was to be approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it was almost 25% different from the atomic number of gold, the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (but was element 79 on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom were exactly equal to its place in the periodic table (also known as element number, atomic number, and symbolized Z). This eventually proved to be the case. Moseley's 1913 experiment The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of Z. To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminium (Z = 13) to gold (Z = 79) used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number Z. Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time. Missing elements After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. 
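As a rough, minimal illustration of the relationship behind that method, the Bohr-model form of Moseley's law for K-alpha lines can be inverted to read Z off a measured line energy. The line energies below are standard reference values and the 10.2 eV prefactor follows from the Rydberg energy, so this is an illustration rather than Moseley's own data reduction:

import math

# Moseley's law for K-alpha X-rays in its Bohr-model form:
#   E_K-alpha ≈ 13.6 eV * (3/4) * (Z - 1)^2 ≈ 10.2 eV * (Z - 1)^2
# so a measured K-alpha line energy gives the nuclear charge directly.
PREFACTOR_EV = 13.6 * 3 / 4   # ≈ 10.2 eV

def z_from_kalpha(energy_ev):
    """Estimate the atomic number Z from a measured K-alpha line energy (eV)."""
    return 1 + math.sqrt(energy_ev / PREFACTOR_EV)

# Approximate textbook K-alpha energies, in eV.
lines = {"Fe": 6_400, "Cu": 8_050, "Mo": 17_480}

for element, energy in lines.items():
    print(f"{element}: K-alpha {energy / 1000:.2f} keV -> Z ≈ {z_from_kalpha(energy):.1f}")
# Fe -> ~26, Cu -> ~29, Mo -> ~42, matching their places in the periodic table.

The one-unit offset in (Z − 1) is the screening correction noted above for K lines.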
There were seven elements (with Z < 92) which were not found and therefore identified as still undiscovered, corresponding to atomic numbers 43, 61, 72, 75, 85, 87 and 91. From 1918 to 1947, all seven of these missing elements were discovered. By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium (Z = 96). The proton and the idea of nuclear electrons In 1915, the reason for nuclear charge being quantized in units of Z, which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms. In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas, and believed he had proven Prout's law. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus there was required a hypothesis for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to have four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons in the nucleus to give it a residual charge of +79, consistent with its atomic number. Discovery of the neutron makes Z the proton number All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive nuclear charge now was realized to come entirely from a content of 79 protons. Since Moseley had previously shown that the atomic number Z of an element equals this positive charge, it was now clear that Z is identical to the number of protons of its nuclei. Chemical properties Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is Z (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of any mixture of atoms with a given atomic number. New elements The quest for new elements is usually described using atomic numbers. As of , all elements with atomic numbers 1 to 118 have been observed. 
Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created; for example, oganesson (Z = 118) was produced by bombarding californium (Z = 98) targets with calcium-48 ions (Z = 20). In general, the half-life of a nuclide becomes shorter as atomic number increases, though undiscovered nuclides with certain "magic" numbers of protons and neutrons may have relatively longer half-lives and comprise an island of stability. A hypothetical element composed only of neutrons, neutronium, has also been proposed and would have atomic number 0, but has never been observed. See also References Chemical properties Nuclear physics Atoms Dimensionless numbers of chemistry Numbers
Atomic number
[ "Physics", "Chemistry", "Mathematics" ]
2,311
[ "Quantity", "Chemical quantities", "Mathematical objects", "Numbers", "Arithmetic", "nan", "Nuclear physics", "Atoms", "Dimensionless numbers of chemistry", "Matter" ]
682
https://en.wikipedia.org/wiki/Adobe
Adobe is a building material made from earth and organic materials. Adobe is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or various architectural styles like Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials, and is used throughout the world. Adobe architecture has been dated to before 5,100 BP. Description Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth. Strength In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake. Distribution Buildings made of sun-dried earth are common throughout the world (Middle East, Western Asia, North Africa, West Africa, South America, Southwestern North America, Southwestern and Eastern Europe). Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handfuls or basketfuls of adobe, until the Spanish introduced them to making bricks. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Its wide use can be attributed to its simplicity of design and manufacture, and economics. Etymology The word adobe has existed for around 4,000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian word ḏbt "mud brick" (with vowels unwritten). Middle Egyptian evolved into Late Egyptian and finally to Coptic, where it appeared as ⲧⲱⲃⲉ tōbə. This was adopted into Arabic as aṭ-ṭawbu or aṭ-ṭūbu, with the definite article al- attached to the root tuba. This was assimilated into the Old Spanish language as adobe, probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction. In more modern English usage, the term adobe has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method. Composition An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. 
Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles , with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight. No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition. Material properties Adobe walls are load bearing, i.e. they carry their own weight into the foundation rather than by another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending loads. The building codes require the building sustain a lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture strength of at least for the finished block. In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material. Thermodynamic material properties have significant variation in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia, and conclude that experimental tests should be performed over a longer period of time than usual – preferably with changing thermal jumps. There is an effective R-value for a north facing wall of R0=10 hr ft2 °F/Btu, which corresponds to thermal conductivity k=10 in x 1 ft/12 in /R0=0.33 Btu/(hr ft °F) or 0.57 W/(m K) in agreement with the thermal conductivity reported from another source. To determine the total R-value of a wall, scale R0 by the thickness of the wall in inches. The thermal resistance of adobe is also stated as an R-value for a wall R0=4.1 hr ft2 °F/Btu. Another source provides the following properties: conductivity 0.30 Btu/(hr ft °F) or 0.52 W/(m K); specific heat capacity 0.24 Btu/(lb °F) or 1 kJ/(kg K) and density , giving heat capacity 25.4 Btu/(ft3 °F) or 1700 kJ/(m3 K). 
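A minimal sketch of how these figures combine, using the SI values just quoted (a conductivity of about 0.55 W/(m K) and a volumetric heat capacity of about 1700 kJ/(m³ K)) together with the standard result for a daily temperature wave penetrating a thick wall; the 10-inch wall thickness is an assumption chosen purely for illustration:

import math

# Properties quoted above for adobe, in SI units.
k = 0.55          # thermal conductivity, W/(m K)
c_vol = 1.7e6     # volumetric heat capacity, J/(m^3 K)
thickness = 0.25  # assumed wall thickness, m (roughly 10 in)

# Thermal diffusivity: how quickly a temperature change propagates into the wall.
alpha = k / c_vol                      # m^2/s

# For a daily (24 h) cycle, the classical semi-infinite-solid solution gives a
# phase lag at depth x of x * sqrt(P / (4 * pi * alpha)).
period = 24 * 3600                     # s
time_lag_h = thickness * math.sqrt(period / (4 * math.pi * alpha)) / 3600

print(f"thermal diffusivity ≈ {alpha:.2e} m²/s")   # ≈ 3.2e-7 m²/s
print(f"estimated time lag ≈ {time_lag_h:.0f} h")  # on the order of 10 h

A lag on the order of ten hours is consistent with the qualitative description above of adobe walls releasing stored heat well after sunset.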
Using the average value of the thermal conductivity as k = 0.32 Btu/(hr ft °F) or 0.55 W/(m K), the thermal diffusivity is calculated to be . Uses Poured and puddled adobe walls Poured and puddled adobe (puddled clay, piled earth), today called cob, is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay and sand-based material worked into a dense, plastic state. These were the oldest methods of building with adobe in the Americas, until holes in the ground were used as forms and, later, wooden forms for making individual bricks were introduced by the Spanish. Adobe bricks Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking. The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage. Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar, or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test is done on the soil content first. To do so, a sample of the soil is mixed into a clear container with some water, creating an almost completely saturated liquid. The container is shaken vigorously for one minute. It is then allowed to settle for a day until the soil has settled into layers. Heavier particles settle out first, sand above, silt above that, and very fine clay and organic matter will stay in suspension for days. After the water has cleared, percentages of the various particles can be determined. Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than clay, not less than sand, and never more than silt. During the Great Depression, designer and builder Hugh W. Comstock used cheaper materials and made a specialized adobe brick called "Bitudobe." His first adobe house was built in 1936. In 1948, he published the book Post-Adobe; Simplified Adobe Construction Combining A Rugged Timber Frame And Modern Stabilized Adobe, which described his method of construction, including how to make "Bitudobe." In 1938, he served as an adviser to the architects Franklin & Kump Associates, who built the Carmel High School, which used his Post-adobe system. Adobe wall construction The ground supporting an adobe structure should be compressed, as the weight of an adobe wall is significant and foundation settling may cause cracking of the wall. Footing depth is to be below the ground frost level. The footing and stem wall are commonly thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls rarely rise above two stories as they are load bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above. 
Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls more able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied. These protect the adobe wall from water damage, but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need protection of plasters. Adobe roof The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe. Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework has been preformed using a steel framing and a layering of a metal fencing or wiring over the framework to allow an even load as masses of adobe are spread across the metal fencing like cob and allowed to air dry accordingly. This method was demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking. The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied. To construct a flat adobe roof, beams of wood were laid to span the building, the ends of which were attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure a greater expanse is covered when placing the bricks onto the roof. Following each individual brick should be a layer of adobe mortar, recommended to be at least thick to make certain there is ample strength between the brick's edges and also to provide a relative moisture barrier during rain. Roof design evolved around 1850 in the American Southwest. of adobe mud was applied on top of the latillas, then of dry adobe dirt applied to the roof. The dirt was contoured into a low slope to a downspout aka a 'canal'. When moisture was applied to the roof the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed. Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls. In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances, referencing the UBC added requirements to building with adobe. 
These included: restriction of the building height of adobe structures to one story, requirements for the adobe mix (compressive and shear strength), and new requirements which stated that every building shall be designed to withstand seismic activity, specifically lateral forces. By the 1980s, however, seismic-related changes in the California Building Code effectively ended solid-wall adobe construction in California, although post-and-beam adobe and veneers are still being used. Adobe around the world The largest structure ever made from adobe is the Arg-é Bam built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks, and the ciudadelas of Chan Chan and Tambo Colorado, both in Peru. See also used adobe walls (waterproofing plaster) (also known as Ctesiphon Arch) in Iraq is the largest mud brick arch in the world, built beginning in 540 AD References External links Soil-based building materials Masonry Adobe buildings and structures Appropriate technology Vernacular architecture Sustainable building Western (genre) staples and terminology
Adobe
[ "Engineering" ]
3,030
[ "Construction", "Sustainable building", "Masonry", "Building engineering" ]
713
https://en.wikipedia.org/wiki/Android%20%28robot%29
An android is a humanoid robot or other artificial being, often made from a flesh-like material. Historically, androids existed only in the domain of science fiction and were frequently seen in film and television, but advances in robot technology have allowed the design of functional and realistic humanoid robots. Terminology The Oxford English Dictionary traces the earliest use (as "Androides") to Ephraim Chambers' 1728 Cyclopaedia, in reference to an automaton that St. Albertus Magnus allegedly created. By the late 1700s, "androides", elaborate mechanical devices resembling humans performing human activities, were displayed in exhibit halls. The term "android" appears in US patents as early as 1863 in reference to miniature human-like toy automatons. The term android was used in a more modern sense by the French author Auguste Villiers de l'Isle-Adam in his work Tomorrow's Eve (1886), featuring an artificial humanoid robot named Hadaly. The term made an impact into English pulp science fiction starting from Jack Williamson's The Cometeers (1936) and the distinction between mechanical robots and fleshy androids was popularized by Edmond Hamilton's Captain Future stories (1940–1944). Although Karel Čapek's robots in R.U.R. (Rossum's Universal Robots) (1921)—the play that introduced the word robot to the world—were organic artificial humans, the word "robot" has come to primarily refer to mechanical humans, animals, and other beings. The term "android" can mean either one of these, while a cyborg ("cybernetic organism" or "bionic man") would be a creature that is a combination of organic and mechanical parts. The term "droid", popularized by George Lucas in the original Star Wars film and now used widely within science fiction, originated as an abridgment of "android", but has been used by Lucas and others to mean any robot, including distinctly non-human form machines like R2-D2. The word "android" was used in Star Trek: The Original Series episode "What Are Little Girls Made Of?" The abbreviation "andy", coined as a pejorative by writer Philip K. Dick in his novel Do Androids Dream of Electric Sheep?, has seen some further usage, such as within the TV series Total Recall 2070. While the term "android" is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. Besides one can refer to robots without alluding to their sexual appearance by calling them anthrobots (a portmanteau of anthrōpos and robot; see anthrobotics) or anthropoids (short for anthropoid robots; the term humanoids is not appropriate because it is already commonly used to refer to human-like organic species in the context of science fiction, futurism and speculative astrobiology). Authors have used the term android in more diverse ways than robot or cyborg. In some fictional works, the difference between a robot and android is only superficial, with androids being made to look like humans on the outside but with robot-like internal mechanics. In other stories, authors have used the word "android" to mean a wholly organic, yet artificial, creation. Other fictional depictions of androids fall somewhere in between. Eric G. 
Wilson, who defines an android as a "synthetic human being", distinguishes between three types of android, based on their body's composition: the mummy type – made of "dead things" or "stiff, inanimate, natural material", such as mummies, puppets, dolls and statues the golem type – made from flexible, possibly organic material, including golems and homunculi the automaton type – made from a mix of dead and living parts, including automatons and robots Although human morphology is not necessarily the ideal form for working robots, the fascination in developing robots that can mimic it can be found historically in the assimilation of two concepts: simulacra (devices that exhibit likeness) and automata (devices that have independence). Projects Several projects aiming to create androids that look, and, to a certain degree, speak or act like a human being have been launched or are underway. Japan Japanese robotics have been leading the field since the 1970s. Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the first android, a full-scale humanoid intelligent robot. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth. In 1984, WABOT-2 was revealed, and made a number of improvements. It was capable of playing the organ. Wabot-2 had ten fingers and two feet, and was able to read a score of music. It was also able to accompany a person. In 1986, Honda began its humanoid research and development program, to create humanoid robots capable of interacting successfully with humans. The Intelligent Robotics Lab, directed by Hiroshi Ishiguro at Osaka University, and the Kokoro company demonstrated the Actroid at Expo 2005 in Aichi Prefecture, Japan and released the Telenoid R1 in 2010. In 2006, Kokoro developed a new DER 2 android. The height of the human body part of DER2 is 165 cm. There are 47 mobile points. DER2 can not only change its expression but also move its hands and feet and twist its body. The "air servosystem" which Kokoro developed originally is used for the actuator. As a result of having an actuator controlled precisely with air pressure via a servosystem, the movement is very fluid and there is very little noise. DER2 realized a slimmer body than that of the former version by using a smaller cylinder. Outwardly DER2 has a more beautiful proportion. Compared to the previous model, DER2 has thinner arms and a wider repertoire of expressions. Once programmed, it is able to choreograph its motions and gestures with its voice. The Intelligent Mechatronics Lab, directed by Hiroshi Kobayashi at the Tokyo University of Science, has developed an android head called Saya, which was exhibited at Robodex 2002 in Yokohama, Japan. There are several other initiatives around the world involving humanoid research and development at this time, which will hopefully introduce a broader spectrum of realized technology in the near future. Now Saya is working at the Science University of Tokyo as a guide. The Waseda University (Japan) and NTT docomo's manufacturers have succeeded in creating a shape-shifting robot WD-2. It is capable of changing its face. 
At first, the creators decided the positions of the necessary points to express the outline, eyes, nose, and so on of a certain person. The robot expresses its face by moving all points to the decided positions, they say. The first version of the robot was first developed back in 2003. After that, a year later, they made a couple of major improvements to the design. The robot features an elastic mask made from the average head dummy. It uses a driving system with a 3DOF unit. The WD-2 robot can change its facial features by activating specific facial points on a mask, with each point possessing three degrees of freedom. This one has 17 facial points, for a total of 56 degrees of freedom. As for the materials they used, the WD-2's mask is fabricated with a highly elastic material called Septom, with bits of steel wool mixed in for added strength. Other technical features reveal a shaft driven behind the mask at the desired facial point, driven by a DC motor with a simple pulley and a slide screw. Apparently, the researchers can also modify the shape of the mask based on actual human faces. To "copy" a face, they need only a 3D scanner to determine the locations of an individual's 17 facial points. After that, they are then driven into position using a laptop and 56 motor control boards. In addition, the researchers also mention that the shifting robot can even display an individual's hair style and skin color if a photo of their face is projected onto the 3D Mask. Singapore Prof Nadia Thalmann, a Nanyang Technological University scientist, directed efforts of the Institute for Media Innovation along with the School of Computer Engineering in the development of a social robot, Nadine. Nadine is powered by software similar to Apple's Siri or Microsoft's Cortana. Nadine may become a personal assistant in offices and homes in future, or she may become a companion for the young and the elderly. Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre led a three-year R&D development in tele-presence robotics, creating EDGAR. A remote user can control EDGAR with the user's face and expressions displayed on the robot's face in real time. The robot also mimics their upper body movements. South Korea KITECH researched and developed EveR-1, an android interpersonal communications model capable of emulating human emotional expression via facial "musculature" and capable of rudimentary conversation, having a vocabulary of around 400 words. She is tall and weighs , matching the average figure of a Korean woman in her twenties. EveR-1's name derives from the Biblical Eve, plus the letter r for robot. EveR-1's advanced computing processing power enables speech recognition and vocal synthesis, at the same time processing lip synchronization and visual recognition by 90-degree micro-CCD cameras with face recognition technology. An independent microchip inside her artificial brain handles gesture expression, body coordination, and emotion expression. Her whole body is made of highly advanced synthetic jelly silicon and with 60 artificial joints in her face, neck, and lower body; she is able to demonstrate realistic facial expressions and sing while simultaneously dancing. In South Korea, the Ministry of Information and Communication had an ambitious plan to put a robot in every household by 2020. 
Several robot cities have been planned for the country: the first will be built in 2016 at a cost of 500 billion won (US$440 million), of which 50 billion is direct government investment. The new robot city will feature research and development centers for manufacturers and part suppliers, as well as exhibition halls and a stadium for robot competitions. The country's new Robotics Ethics Charter will establish ground rules and laws for human interaction with robots in the future, setting standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots to prevent human abuse of robots and vice versa. United States Walt Disney and a staff of Imagineers created Great Moments with Mr. Lincoln that debuted at the 1964 New York World's Fair. Dr. William Barry, an Education Futurist and former visiting West Point Professor of Philosophy and Ethical Reasoning at the United States Military Academy, created an AI android character named "Maria Bot". This Interface AI android was named after the infamous fictional robot Maria in the 1927 film Metropolis, as a well-behaved distant relative. Maria Bot is the first AI Android Teaching Assistant at the university level. Maria Bot has appeared as a keynote speaker as a duo with Barry for a TEDx talk in Everett, Washington in February 2020. Resembling a human from the shoulders up, Maria Bot is a virtual being android that has complex facial expressions and head movement and engages in conversation about a variety of subjects. She uses AI to process and synthesize information to make her own decisions on how to talk and engage. She collects data through conversations, direct data inputs such as books or articles, and through internet sources. Maria Bot was built by an international high-tech company for Barry to help improve education quality and eliminate education poverty. Maria Bot is designed to create new ways for students to engage and discuss ethical issues raised by the increasing presence of robots and artificial intelligence. Barry also uses Maria Bot to demonstrate that programming a robot with life-affirming, ethical framework makes them more likely to help humans to do the same. Maria Bot is an ambassador robot for good and ethical AI technology. Hanson Robotics, Inc., of Texas and KAIST produced an android portrait of Albert Einstein, using Hanson's facial android technology mounted on KAIST's life-size walking bipedal robot body. This Einstein android, also called "Albert Hubo", thus represents the first full-body walking android in history. Hanson Robotics, the FedEx Institute of Technology, and the University of Texas at Arlington also developed the android portrait of sci-fi author Philip K. Dick (creator of Do Androids Dream of Electric Sheep?, the basis for the film Blade Runner), with full conversational capabilities that incorporated thousands of pages of the author's works. In 2005, the PKD android won a first-place artificial intelligence award from AAAI. Use in fiction Androids are a staple of science fiction. Isaac Asimov pioneered the fictionalization of the science of robotics and artificial intelligence, notably in his 1950s series I, Robot. One thing common to most fictional androids is that the real-life technological challenges associated with creating thoroughly human-like robots — such as the creation of strong artificial intelligence—are assumed to have been solved. 
Fictional androids are often depicted as mentally and physically equal or superior to humans—moving, thinking and speaking as fluidly as them. The tension between the nonhuman substance and the human appearance—or even human ambitions—of androids is the dramatic impetus behind most of their fictional depictions. Some android heroes seek, like Pinocchio, to become human, as in the film Bicentennial Man, or Data in Star Trek: The Next Generation. Others, as in the film Westworld, rebel against abuse by careless humans. Android hunter Deckard in Do Androids Dream of Electric Sheep? and its film adaptation Blade Runner discovers that his targets appear to be, in some ways, more "human" than he is. The sequel Blade Runner 2049 involves android hunter K, himself an android, discovering the same thing. Android stories, therefore, are not essentially stories "about" androids; they are stories about the human condition and what it means to be human. One aspect of writing about the meaning of humanity is to use discrimination against androids as a mechanism for exploring racism in society, as in Blade Runner. Perhaps the clearest example of this is John Brunner's 1968 novel Into the Slave Nebula, where the blue-skinned android slaves are explicitly shown to be fully human. More recently, the androids Bishop and Annalee Call in the films Aliens and Alien Resurrection are used as vehicles for exploring how humans deal with the presence of an "Other". The 2018 video game Detroit: Become Human also explores how androids are treated as second class citizens in a near future society. Female androids, or "gynoids", are often seen in science fiction, and can be viewed as a continuation of the long tradition of men attempting to create the stereotypical "perfect woman". Examples include the Greek myth of Pygmalion and the female robot Maria in Fritz Lang's Metropolis. Some gynoids, like Pris in Blade Runner, are designed as sex-objects, with the intent of "pleasing men's violent sexual desires", or as submissive, servile companions, such as in The Stepford Wives. Fiction about gynoids has therefore been described as reinforcing "essentialist ideas of femininity", although others have suggested that the treatment of androids is a way of exploring racism and misogyny in society. The 2015 Japanese film Sayonara, starring Geminoid F, was promoted as "the first movie to feature an android performing opposite a human actor". See also References Further reading Kerman, Judith B. (1991). Retrofitting Blade Runner: Issues in Ridley Scott's Blade Runner and Philip K. Dick's Do Androids Dream of Electric Sheep? Bowling Green, OH: Bowling Green State University Popular Press. . Perkowitz, Sidney (2004). Digital People: From Bionic Humans to Androids. Joseph Henry Press. . Shelde, Per (1993). Androids, Humanoids, and Other Science Fiction Monsters: Science and Soul in Science Fiction Films. New York: New York University Press. . Ishiguro, Hiroshi. "Android science." Cognitive Science Society. 2005. Glaser, Horst Albert and Rossbach, Sabine: The Artificial Human, Frankfurt/M., Bern, New York 2011 "The Artificial Human" TechCast Article Series, Jason Rupinski and Richard Mix, "Public Attitudes to Androids: Robot Gender, Tasks, & Pricing" Carpenter, J. (2009). Why send the Terminator to do R2D2s job?: Designing androids as rhetorical phenomena. Proceedings of HCI 2009: Beyond Gray Droids: Domestic Robot Design for the 21st Century. Cambridge, UK. 1 September. Telotte, J.P. 
Replications: A Robotic History of the Science Fiction Film. University of Illinois Press, 1995. External links Japanese inventions South Korean inventions Osaka University research Science fiction themes Human–machine interaction Robots
Android (robot)
[ "Physics", "Technology", "Engineering", "Biology" ]
3,597
[ "Machines", "Behavior", "Robots", "Physical systems", "Android (robot)", "Human–machine interaction", "Design", "Human behavior" ]
896
https://en.wikipedia.org/wiki/Argon
Argon is a chemical element; it has symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust. Nearly all argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas. The name "argon" is derived from the Greek word ἀργόν (argon), the neuter singular form of ἀργός (argos), meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990. Argon is extracted industrially by the fractional distillation of liquid air. It is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. It is also used in incandescent and fluorescent lighting, and other gas-discharge tubes. It is used to make distinctive blue-green argon-ion gas lasers. It is also used in fluorescent glow starters. Characteristics Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature. Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+ (argonium), and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized. History Argon (Greek ἀργόν, the neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity. This property of the first noble gas to be discovered impressed the namers. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785. Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's.
They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon. Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through the independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements. Prior to 1957, the symbol for argon was "A". This was changed to Ar after the International Union of Pure and Applied Chemistry published the work Nomenclature of Inorganic Chemistry in 1957. Occurrence Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively. Isotopes The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating. In Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created by neutron capture by 40Ca followed by alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days. Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes.
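The isotope figures above are the basis of K–Ar dating. As a rough illustration only, the sketch below applies the standard K–Ar age relation using the 1.25-billion-year half-life and the 11.2% branching fraction to 40Ar quoted in the text; real geochronology uses refined decay constants and corrects for atmospheric argon and argon loss, none of which is modelled here.

    import math

    # Values quoted in the isotope passage above (used purely for illustration)
    T_HALF_K40 = 1.25e9        # years, half-life of potassium-40
    BRANCH_TO_AR40 = 0.112     # fraction of 40K decays that yield 40Ar (11.2%)

    def k_ar_age(ar40_radiogenic, k40_remaining):
        """Estimate a rock age in years from radiogenic 40Ar and remaining 40K.

        Uses t = (1/lambda) * ln(1 + (40Ar*/40K) / branching_fraction),
        ignoring atmospheric-argon corrections and any argon loss.
        """
        lam = math.log(2) / T_HALF_K40   # total decay constant of 40K, per year
        return math.log(1 + (ar40_radiogenic / k40_remaining) / BRANCH_TO_AR40) / lam

    # Example: radiogenic 40Ar equal to 5% of the remaining 40K gives roughly 6.7e8 years
    print(f"{k_ar_age(5.0, 100.0):.2e} years")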
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar. The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table). Compounds Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound, with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975; however, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery caused the recognition that argon could form weakly bound compounds, even though it was not the first. It is stable up to 17 kelvins (−256 °C). The metastable ArCF2²⁺ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in the interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space. Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa. Production Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year. Applications Argon has several desirable properties: it is a chemically inert gas; it is the cheapest alternative when nitrogen is not sufficiently inert; it has low thermal conductivity; and it has electronic properties (ionization and/or the emission spectrum) desirable for some applications. Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. It is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of its applications arise simply because it is inert and relatively cheap. Industrial processes Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning.
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium. Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable for use with food products, and since it replaces oxygen within the dead bird, argon also enhances shelf life. Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam. Scientific research Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar contamination. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, underground argon, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions. At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and the use of expensive, dangerous and rare materials. Preservative Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon. In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry. Argon is sometimes used as the propellant in aerosol cans.
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage. Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced. Laboratory equipment Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus. Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication. Medical use Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient. Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects. Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood. Lighting Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers. Miscellaneous uses Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity. Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure. Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating are used to date sedimentary, metamorphic, and igneous rocks. Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse. Safety Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. 
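The density comparison in the safety note above can be checked with a quick ideal-gas estimate: at the same temperature and pressure, gas density scales with molar mass. A minimal sketch follows; the mean molar mass of dry air is an assumed round value, not taken from the text.

    # Rough check of the "38% more dense than air" figure, assuming ideal-gas behaviour
    M_ARGON = 39.948   # g/mol, molar mass of argon
    M_AIR = 28.96      # g/mol, approximate mean molar mass of dry air (assumed)

    excess = M_ARGON / M_AIR - 1
    print(f"Argon is about {excess:.0%} denser than air")   # prints roughly 38%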
A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling. See also Industrial gas Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors. References Further reading On the triple point pressure of 69 kPa. On the triple point temperature of 83.8058 K. External links Argon at The Periodic Table of Videos (University of Nottingham) USGS Periodic Table – Argon Diving applications: Why Argon? Chemical elements E-number additives Noble gases Industrial gases
Argon
[ "Physics", "Chemistry", "Materials_science" ]
3,963
[ "Noble gases", "Chemical elements", "Nonmetals", "Industrial gases", "Chemical process engineering", "Atoms", "Matter" ]
897
https://en.wikipedia.org/wiki/Arsenic
Arsenic is a chemical element with the symbol As and the atomic number 33. It is a metalloid and one of the pnictogens, and therefore shares many properties with its group 15 neighbors phosphorus and antimony. Arsenic is notoriously toxic. It occurs naturally in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry. The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is also a common n-type dopant in semiconductor electronic devices, and a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds. Arsenic has been known since ancient times to be poisonous to humans. However, a few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic have been proposed to be an essential dietary element in rats, hamsters, goats, and chickens. Research has not been conducted to determine whether small amounts of arsenic may play a role in human metabolism. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world. The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic number 1 in its 2001 prioritized list of hazardous substances at Superfund sites. Arsenic is classified as a Group-A carcinogen. Characteristics Physical characteristics The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R-3m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form. Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus. Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor.
Arsenic sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 887 K (614 °C). The triple point is at 3.63 MPa and 1,090 K (817 °C). Isotopes Arsenic occurs in nature as one stable isotope, 75As, and is therefore called a monoisotopic element. As of 2024, at least 32 radioisotopes have also been synthesized, ranging in atomic mass from 64 to 95. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=26.26 hours), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds. Chemistry Arsenic has electronegativity and ionization energies similar to those of its lighter pnictogen congener phosphorus and therefore readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic forms arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion, as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the +5 oxidation state than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers. Compounds Compounds of arsenic resemble, in some respects, those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As4 rings in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons. Inorganic compounds One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, the presence of light and certain catalysts (namely aluminium), increase the rate of decomposition.
It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts, known as arsenates, are a major source of arsenic contamination of groundwater in regions with high levels of naturally-occurring arsenic minerals. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. In As4S10, arsenic has a formal oxidation state of +2 in As4S4 which features As-As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.) Alloys Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. Organoarsenic compounds A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive, garlic-like odor; it is very toxic. Occurrence and production Arsenic is the 53rd most abundant element in the Earth's crust, comprising about 1.5 parts per million (0.00015%). Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Arsenic is the 22nd most abundant element in seawater and ranks 41st in abundance in the universe. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. 
Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from a molten lead–arsenic mixture. History The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zar ("gold") from the word zarnikh, meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek (using folk etymology) as arsenikon (ἀρσενικόν) – a neuter form of the Greek adjective arsenikos (ἀρσενικός), meaning "male", "virile". Latin speakers adopted the Greek term as arsenicum, which in French ultimately became arsenic, whence the English word "arsenic". Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era. During the Bronze Age, arsenic was melted with copper to make arsenical bronze. Jabir ibn Hayyan described the isolation of arsenic before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide. In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late 18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon I implicates arsenic poisoning involving wallpaper. Two arsenic pigments have been widely used since their discovery – Paris Green in 1814 and Scheele's Green in 1775.
After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion. It was later replaced with Paris Green, another arsenic-based dye. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. In small doses, soluble arsenic compounds act as stimulants, and were once popular as medicines in the mid-18th to 19th centuries; this use was especially prevalent for sport animals such as race horses or work dogs and continued into the 20th century. A 2006 study of the remains of the Australian racehorse Phar Lap determined that its 1932 death was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." Applications Agricultural The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (As(III)) is more soluble than arsenate (As(V)) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that the addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production; in particular, it was used in the U.S. until 2015 to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys.
Medical use During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler), for treating diseases such as cancer or psoriasis. Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis in spite of their severe toxicity, since the disease is almost uniformly fatal if untreated. In 2000 the US Food and Drug Administration approved arsenic trioxide for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations. Alloys The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. Military After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Other uses Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets. Arsenic is used in bronzing. As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets. Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments. Arsenic is also used for taxonomic sample preservation. It was also used in embalming fluids historically. Arsenic was used in the taxidermy process up until the 1980s. Arsenic was used as an opacifier in ceramics, creating white glazes. Until recently, arsenic was used in optical glass. Modern glass manufacturers have ceased using both arsenic and lead. Biological role Bacteria Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr). 
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as the electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that the Halomonadaceae strain GFAJ-1 could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent researcher groups. Potential role in higher animals Arsenic may be an essential trace mineral in birds, involved in the synthesis of methionine metabolites. However, the role of arsenic in bird nutrition is disputed, as other authors state that arsenic is toxic in small amounts. Some evidence indicates that arsenic is an essential trace mineral in mammals. Heredity Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of the tumor suppressor genes p16 and p53, thus increasing the risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic species involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Biomethylation Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 μg/day. Values of about 1000 μg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Environmental issues Exposure Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts. During the Victorian era, arsenic was widely used in home decor, especially wallpapers. In Europe, an analysis based on 20,000 soil samples across all 28 countries shows that 98% of sampled soils have concentrations less than 20 mg/kg.
In addition, the As hotspots are related to frequent fertilization and close distance to mining activities. Occurrence in drinking water Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater, caused by the anoxic conditions of the subsurface. This groundwater was used after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic polluted drinking water indicated by a 2017 report in Science. Podgorski's team investigated more than 1200 samples and more than 66% exceeded the WHO minimum contamination level. Since the 1980s, residents of the Ba Men region of Inner Mongolia, China have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 μg/L, suggesting that arsenic induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water. A study by IIT Kharagpur found high levels of Arsenic in groundwater of 20% of India's land, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have highest land area exposed to arsenic. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. 
The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Private-dug–well waters are most at risk for containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic. Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke. Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on in-situ remediation method (SAR Technology). This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A recent 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. 
Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 μg/L. This may find applications in areas where the potable water is extracted from underground aquifers. San Pedro de Atacama For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Hazard maps for contaminated groundwater Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Redox transformation of arsenic in natural waters Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When the water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria gain energy by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic. The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42− and AsO43− at pH <2, 2–7, 7–11 and >11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9. Oxidation and reduction affect the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments.
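As a simple aid to the speciation summary above, the sketch below encodes the dominant inorganic arsenic species for the pH ranges just quoted. It is a lookup of those ranges only, not a speciation model: real systems also depend on the redox potential, sulfide, iron and manganese chemistry discussed in this section, and the pH cut-offs are taken as sharp boundaries purely for illustration.

    def dominant_arsenic_species(pH, reducing=False):
        """Return the dominant dissolved inorganic arsenic species for a given pH.

        Follows the oxic and reducing pH ranges quoted in the text; a crude lookup,
        not a thermodynamic speciation calculation.
        """
        if reducing:
            # Arsenite dominates across most natural-water pH under reducing conditions
            return "H3AsO3" if 2 <= pH <= 9 else "deprotonated arsenite species"
        # Oxic conditions: successive deprotonation of arsenic acid
        if pH < 2:
            return "H3AsO4"
        if pH < 7:
            return "H2AsO4-"
        if pH < 11:
            return "HAsO4(2-)"
        return "AsO4(3-)"

    for pH in (1, 6, 8, 12):
        print(pH, dominant_arsenic_species(pH), dominant_arsenic_species(pH, reducing=True))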
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. As a result, arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides, especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite, which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 into organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where sulfate reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria, with rate constants ranging from 0.02 to 0.3 day−1. Wood preservation in the US As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole.
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. Mapping of industrial releases in the US One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Bioremediation Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the form of arsenic more toxic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, but the disposal of contaminated plant material needs to be considered. Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. Arsenic removal Coagulation and flocculation are closely related processes commonly used for arsenate removal from water. Due to the net negative charge carried by arsenate ions, they settle slowly or not at all because of charge repulsion. In coagulation, a positively charged coagulant such as an iron or aluminium salt (commonly used salts: FeCl3, Fe2(SO4)3, Al2(SO4)3) neutralizes the negatively charged arsenate, enabling it to settle.
Flocculation follows, in which a flocculant bridges the smaller particles and allows the aggregate to precipitate out of the water. However, such methods may not be efficient on arsenite, as As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH. The major drawbacks of coagulation and flocculation are the costly disposal of arsenate-concentrated sludge, and possible secondary contamination of the environment. Moreover, coagulants such as iron may themselves leave contamination that exceeds safety levels. Toxicity and precautions Arsenic and many of its compounds are especially potent poisons (e.g. arsine). Small amounts of arsenic can be detected by pharmacopoeial methods, which involve reduction of arsenic to arsine with the help of zinc; the result can be confirmed with mercuric chloride paper. Classification Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens. Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [As(V)] and arsenite [As(III)]". Legal limits, food, and drink In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard. 
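For dilute aqueous solutions such as drinking water or juice, 1 ppb corresponds to roughly 1 microgram of arsenic per litre, so the limits quoted above can be compared on a mass-per-day basis. The short Python sketch below illustrates the conversion; the daily consumption figures in it are illustrative assumptions, not values taken from the regulations.

# Rough illustration: convert a ppb limit in a dilute aqueous solution
# to an approximate daily arsenic intake. Assumes 1 L of a water-like
# liquid has a mass of about 1 kg, so 1 ppb (by mass) ~ 1 microgram per litre.
def daily_intake_ug(limit_ppb, litres_per_day):
    micrograms_per_litre = limit_ppb  # valid only for dilute, water-like liquids
    return micrograms_per_litre * litres_per_day

# Hypothetical consumption figures, chosen only for illustration:
print(daily_intake_ug(10, 2.0))    # EPA drinking-water limit, 2 L/day   -> 20 ug/day
print(daily_intake_ug(10, 0.25))   # FDA apple-juice action level, 0.25 L/day -> 2.5 ug/day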
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram, or 1000 ppb). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic. In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. Consumer Reports recommended: That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production; That the FDA establish a legal limit for food; That industry change production practices to lower arsenic levels, especially in food for children; and That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then draining it off (reducing inorganic arsenic by about one third along with a slight reduction in vitamin content). Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Reducing arsenic content in rice In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption. Occupational exposure limits Ecotoxicity Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils. Toxicity in animals Biological mechanism Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. 
The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur. Exposure risks and remediation Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites (Croal). The study of chemolithoautotrophic As(III) oxidizers and the heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic. Treatment Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However the USA's Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity. See also Aqua Tofana Arsenic and Old Lace Grainger challenge Hypothetical types of biochemistry References Bibliography Further reading External links WHO fact sheet on arsenic Arsenic Cancer Causing Substances, U.S. National Cancer Institute. CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database Contaminant Focus: Arsenic by the EPA. Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO. National Institute for Occupational Safety and Health – Arsenic Page Chemical elements Metalloids Semimetals Hepatotoxins Pnictogens Endocrine disruptors IARC Group 1 carcinogens Trigonal minerals Minerals in space group 166 Teratogens Fetotoxicants Suspected testicular toxicants Native element minerals Chemical elements with rhombohedral structure
Arsenic
[ "Physics", "Chemistry", "Materials_science" ]
10,952
[ "Matter", "Chemical elements", "Endocrine disruptors", "Materials", "Condensed matter physics", "Teratogens", "Atoms", "Semimetals" ]
900
https://en.wikipedia.org/wiki/Americium
Americium is a synthetic chemical element; it has symbol Am and atomic number 95. It is radioactive and a transuranic member of the actinide series in the periodic table, located under the lanthanide element europium and was thus named after the Americas by analogy. Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer. Americium is a relatively soft radioactive metal with a silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples. History Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series." The new element was isolated from its oxides in a complex, multi-step process. First plutonium-239 nitrate (239PuNO3) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. 
The separation of curium and americium was so painstaking that the Berkeley group initially referred to those elements as pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness). Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined inaccurately and later corrected to 432.2 years. The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h. The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg revealed the synthesis of elements 95 and 96 on the U.S. radio show for children Quiz Kids, five days before the official presentation at an American Chemical Society meeting on 11 November 1945, after one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented, listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium, weighing 40–200 micrograms, were not prepared until 1951, by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C. Occurrence The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of neutron capture and beta decay (238U → 239Pu → 240Pu → 241Pu → 241Am), though the quantities would be tiny and this has not been confirmed. Extraterrestrial long-lived 247Cm is probably also deposited on Earth and has 243Am as one of its intermediate decay products, but again this has not been confirmed. Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including americium; due to military secrecy, this result was not published until 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). 
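The curie-to-becquerel conversion behind that last figure is straightforward (1 Ci = 3.7 × 1010 Bq by definition); a minimal Python check reproduces the 0.37 mBq/g value quoted above.

# Convert the quoted soil activity from picocuries per gram to millibecquerels per gram.
CI_TO_BQ = 3.7e10          # becquerels per curie (exact by definition)
activity_pci_per_g = 0.01  # surface-soil value quoted above

activity_bq_per_g = activity_pci_per_g * 1e-12 * CI_TO_BQ
print(round(activity_bq_per_g * 1000, 2), "mBq/g")  # -> 0.37 mBq/g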
Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils. Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that have theoretically been detected in Przybylski's Star. Synthesis and extraction Isotope nucleosynthesis Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, the price of 241Am has remained almost unchanged, owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a considerably higher cost. Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process: ^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am: ^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm. Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: ^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am Metal generation Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. 
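The in-growth figures quoted above (half of the 241Pu gone after about 15 years, a 241Am maximum after roughly 70 years) follow from the standard parent–daughter decay relation. The short Python sketch below uses only the two half-lives given in this article and locates the maximum; it assumes pure 241Pu at time zero and no other production routes.

# Parent-daughter in-growth of Am-241 from Pu-241 (two-member Bateman relation,
# assuming pure Pu-241 at t = 0 and no other sources of either nuclide).
import math

T_PU241 = 14.35   # half-life of Pu-241 in years (quoted above)
T_AM241 = 432.2   # half-life of Am-241 in years (quoted above)
l1 = math.log(2) / T_PU241   # decay constant of the parent
l2 = math.log(2) / T_AM241   # decay constant of the daughter

def am241_fraction(t):
    """Fraction of the initial Pu-241 atoms present as Am-241 after t years."""
    return l1 / (l1 - l2) * (math.exp(-l2 * t) - math.exp(-l1 * t))

# Analytic maximum at t = ln(l1/l2)/(l1 - l2): about 73 years, consistent
# with the "about 70 years" quoted above.
t_max = math.log(l1 / l2) / (l1 - l2)
print(round(t_max, 1), "years; Am-241 fraction at maximum:", round(am241_fraction(t_max), 2))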
The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009, as such a reagent is highly selective for americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as the reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium. Physical properties In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3), but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than that of curium (1340 °C). At ambient conditions, americium is present in its most stable α form, which has hexagonal crystal symmetry, space group P63/mmc, with cell parameters a = 346.8 pm and c = 1124 pm and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic (fcc) symmetry, space group Fm3m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for the appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consensus on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. 
The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low, and manifests itself in a broadening of the X-ray diffraction peaks. This effect makes the temperature of americium samples, and some of their properties such as electrical resistivity, somewhat uncertain. For americium-241, for example, the resistivity at 4.2 K increases with time from about 2 μOhm·cm to 10 μOhm·cm after 40 hours, and saturates at about 16 μOhm·cm after 140 hours. This effect is less pronounced at room temperature, due to annihilation of radiation defects; likewise, heating a sample that has been kept for hours at low temperature back to room temperature restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 μOhm·cm at liquid helium temperature to 69 μOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but differs from that of plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than that of uranium, thorium and protactinium. Americium is paramagnetic in a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, differing between the shorter a axis and the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured, and from it the standard enthalpy change of formation (ΔfH°) of the aqueous Am3+ ion and the standard potential Am3+/Am0 have been derived. Chemical properties Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states +2, +4, +5, +6 and +7 have also been studied. This is the widest range that has been observed with actinide elements. The colors of americium compounds in aqueous solution are as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmO2+ (yellow), AmO22+ (brown) and Am(VII) (dark green). The absorption spectra have sharp peaks, due to f-f transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. 
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation; a typical reaction is 3AmO2+ + 4H+ → 2AmO22+ + Am3+ + 2H2O. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, Am(VI) forms compounds comparable to uranates, and the AmO22+ ion is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds Oxygen compounds Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide was prepared in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium which is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solutions. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. Lattice constants have been reported for orthorhombic AmCl2 and tetragonal AmBr2. They can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I: {Am} + \underset{mercury\ halide}{HgX2} ->[{} \atop 400 - 500 ^\circ \ce C] {AmX2} + {Hg} Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions: Am^3+ + 3F^- -> AmF3(v) The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2AmF3 + F2 -> 2AmF4 Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differed from other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed over time and was attributed to self-irradiation of americium by its own alpha particles. Most americium(III) halides form hexagonal crystals with slight variation of the color and exact structure between the halogens. For example, the chloride (AmCl3) is reddish and has a structure isotypic with uranium(III) chloride (space group P63/m) and a melting point of 715 °C. The fluoride is isotypic with LaF3 (space group P63/mmc) and the iodide with BiI3 (space group R3). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. 
Crystals of americium(III) chloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic and have a yellow-reddish color and a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis: AmCl3 + H2O -> AmOCl + 2HCl Chalcogenides and pnictides The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. Silicides and borides Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi; it has orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or an inert atmosphere. Organoamericium compounds Analogous to uranocene, americium is predicted to form the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3. Formation of the complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and therefore are useful in its selective separation from lanthanides and other actinides. Biological aspects Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. For example, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. In the laboratory, both americium and curium were found to support the growth of methylotrophs. Fission The isotope 242mAm (half-life 141 years) has the largest cross sections for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but those based on 242mAm are not known yet, probably because of its scarcity and high price. The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. 
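To put the critical masses quoted above into perspective, a bare sphere of a given mass can be converted to a radius using the bulk density quoted earlier in this article (about 12 g/cm3). The sketch below is a purely geometric illustration, not a criticality calculation, and the density value is taken from this article rather than from criticality data.

# Radius of a bare metal sphere of a given mass, using the bulk density of
# americium quoted above (~12 g/cm^3). Purely geometric; says nothing about neutronics.
import math

DENSITY = 12.0  # g/cm^3, value quoted earlier in this article

def sphere_radius_cm(mass_kg):
    volume_cm3 = mass_kg * 1000 / DENSITY
    return (3 * volume_cm3 / (4 * math.pi)) ** (1.0 / 3.0)

for mass in (9, 14, 57.6, 209):   # critical masses quoted above, in kg
    print(mass, "kg ->", round(sphere_radius_cm(mass), 1), "cm radius")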
Scarcity and high price still hinder the application of americium as a nuclear fuel in nuclear reactors. There are proposals for very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals. Isotopes About 18 isotopes and 11 nuclear isomers are known for americium, with mass numbers 229, 230, and 232 through 247. There are two long-lived alpha-emitters: 243Am, which has a half-life of 7,370 years and is the most stable isotope, and 241Am, with a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am; it has a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with an odd number of neutrons have a relatively high rate of nuclear fission and a low critical mass. Americium-241 decays to 237Np, emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with discrete energies between 26.3 and 158.5 keV. Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U. Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U. Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle. Applications Ionization-type smoke detector Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation. The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms. Radionuclide As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses a greater threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes. 
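Both the smoke-detector figure (0.29 microgram per microcurie) and the thermoelectric figure (114.7 mW/g) quoted above follow from the half-life of 241Am. The Python sketch below reproduces them; the decay energy of about 5.64 MeV used for the thermal power is an assumed value for the total energy released per decay and is not quoted in this article.

# Specific activity and specific thermal power of Am-241 from its half-life.
import math

AVOGADRO = 6.022e23
HALF_LIFE_S = 432.2 * 365.25 * 24 * 3600    # 432.2 years in seconds
LAMBDA = math.log(2) / HALF_LIFE_S          # decay constant, 1/s

# Mass corresponding to 1 microcurie (3.7e4 Bq); cf. "0.29 microgram" above.
atoms = 3.7e4 / LAMBDA
mass_g = atoms * 241 / AVOGADRO
print(round(mass_g * 1e6, 2), "micrograms per microcurie")   # -> ~0.29

# Specific thermal power, assuming ~5.64 MeV released per decay (assumption).
Q_JOULE = 5.64e6 * 1.602e-19
activity_per_gram = LAMBDA * AVOGADRO / 241
print(round(activity_per_gram * Q_JOULE * 1000, 1), "mW/g")  # -> ~114.7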
Another proposed space-related application of americium is a fuel for space ships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. The small thickness avoids the problem of self-absorption of emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha-particles. The fission products of 242mAm can either directly propel the spaceship or they can heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator. Another proposal that utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; that is, the americium acts as the self-sustaining "cathode". A single 3.2 kg 242mAm charge of such a battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer. In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function. Neutron source The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction: ^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma ^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations. Production of other elements Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm: ^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (243Bk isotope) was first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O. Spectrometer Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. 
The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and negligible Compton continuum (at least three orders of magnitude lower intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is however obsolete. Health concerns As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth. If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity. Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease. See also Actinides in the environment :Category:Americium compounds Notes References Bibliography Penneman, R. A. and Keenan T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960 Further reading Nuclides and Isotopes – 14th Edition, GE Nuclear Energy, 1989. External links Americium at The Periodic Table of Videos (University of Nottingham) ATSDR – Public Health Statement: Americium World Nuclear Association – Smoke Detectors and Americium Chemical elements Chemical elements with double hexagonal close-packed structure Actinides Carcinogens Synthetic elements
Americium
[ "Physics", "Chemistry", "Environmental_science" ]
8,399
[ "Matter", "Toxicology", "Chemical elements", "Synthetic materials", "Synthetic elements", "Carcinogens", "Atoms", "Radioactivity" ]
902
https://en.wikipedia.org/wiki/Atom
Atoms are the basic particles of the chemical elements. An atom consists of a nucleus of protons and generally neutrons, surrounded by an electromagnetically bound swarm of electrons. The chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. Atoms with the same number of protons but a different number of neutrons are called isotopes of the same element. Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. Atoms are smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. They are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects. More than 99.9994% of an atom's mass is in the nucleus. Protons have a positive electric charge and neutrons have no charge, so the nucleus is positively charged. The electrons are negatively charged, and this opposing charge is what binds them to the nucleus. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral as a whole. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation). The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to attach and detach from each other is responsible for most of the physical changes observed in nature. Chemistry is the science that studies these changes. History of atomic theory In philosophy The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". But this ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton found evidence that matter really is composed of discrete units, and so applied the word atom to those units. Dalton's law of multiple proportions In the early 1800s, John Dalton compiled experimental data gathered by him and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in any group of chemical compounds which all contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This pattern suggested that each element combines with other elements in multiples of a basic unit of weight, with each element having a unit of unique weight. Dalton decided to call these units "atoms". 
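The bookkeeping behind the law of multiple proportions is plain arithmetic: normalize each compound's composition to a fixed amount of one element and compare the amounts of the other. The short Python sketch below illustrates that reduction using the tin oxide percentages discussed immediately below; the historical examples that follow work the same kind of figures by hand.

# Law of multiple proportions: grams of oxygen combined with 100 g of the
# other element, computed from percent compositions. Figures are the tin
# oxide data discussed below.
def oxygen_per_100g(metal_percent, oxygen_percent):
    return 100 * oxygen_percent / metal_percent

grey_oxide = oxygen_per_100g(88.1, 11.9)   # ~13.5 g O per 100 g Sn
white_oxide = oxygen_per_100g(78.7, 21.3)  # ~27.1 g O per 100 g Sn
print(round(grey_oxide, 1), round(white_oxide, 1),
      "ratio ~", round(white_oxide / grey_oxide, 2))   # ratio ~ 2.0, i.e. 1:2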
For example, there are two types of tin oxide: one is a grey powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. Dalton concluded that in the grey oxide there is one atom of oxygen for every atom of tin, and in the white oxide there are two atoms of oxygen for every atom of tin (SnO and SnO2). Dalton also analyzed iron oxides. There is one type of iron oxide that is a black powder which is 78.1% iron and 21.9% oxygen; and there is another iron oxide that is a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. Dalton concluded that in these oxides, for every two atoms of iron, there are two or three atoms of oxygen respectively (Fe2O2 and Fe2O3). As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Discovery of the electron In 1897, J. J. Thomson discovered that cathode rays can be deflected by electric and magnetic fields, which meant that cathode rays are not a form of light but made of electrically charged particles, and their charge was negative given the direction the particles were deflected in. He measured these particles to be 1,700 times lighter than hydrogen (the lightest atom). He called these new particles corpuscles but they were later renamed electrons since these are the particles that carry electricity. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. Thomson explained that an electric current is the passing of electrons from one atom to the next, and when there was no current the electrons embedded themselves in the atoms. This in turn meant that atoms were not indivisible as scientists thought. The atom was composed of electrons whose negative charge was balanced out by some source of positive charge to create an electrically neutral atom. Ions, Thomson explained, must be atoms which have an excess or shortage of electrons. Discovery of the nucleus The electrons in the atom logically had to be balanced out by a commensurate amount of positive charge, but Thomson had no idea where this positive charge came from, so he tentatively proposed that it was everywhere in the atom, the atom being in the shape of a sphere. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. Following from this, Thomson imagined that the balance of electrostatic forces would distribute the electrons throughout the sphere in a more or less even manner. Thomson's model is popularly known as the plum pudding model, though neither Thomson nor his colleagues used this analogy. 
Thomson's model was incomplete, it was unable to predict any other properties of the elements such as emission spectra and valencies. It was soon rendered obsolete by the discovery of the atomic nucleus. Between 1908 and 1913, Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They did this to measure the scattering patterns of the alpha particles. They spotted a small number of alpha particles being deflected by angles greater than 90°. This shouldn't have been possible according to the Thomson model of the atom, whose charges were too diffuse to produce a sufficiently strong electric field. The deflections should have all been negligible. Rutherford proposed that the positive charge of the atom is concentrated in a tiny volume at the center of the atom and that the electrons surround this nucleus in a diffuse cloud. This nucleus carried almost all of the atom's mass, the electrons being so very light. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field that could deflect the alpha particles so strongly. Bohr model A problem in classical mechanics is that an accelerating charged particle radiates electromagnetic radiation, causing the particle to lose kinetic energy. Circular motion counts as acceleration, which means that an electron orbiting a central charge should spiral down into that nucleus as it loses speed. In 1913, the physicist Niels Bohr proposed a new model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable and why elements absorb and emit electromagnetic radiation in discrete spectra. Bohr's model could only predict the emission spectra of hydrogen, not atoms with more than one electron. Discovery of protons and neutrons Back in 1815, William Prout observed that the atomic weights of many elements were multiples of hydrogen's atomic weight, which is in fact true for all of them if one takes isotopes into account. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion is equal to the negative charge of an electron, and these were then the smallest known charged particles. Thomson later found that the positive charge in an atom is a positive multiple of an electron's negative charge. In 1913, Henry Moseley discovered that the frequencies of X-ray emissions from an excited atom were a mathematical function of its atomic number and hydrogen's nuclear charge. In 1919 Rutherford bombarded nitrogen gas with alpha particles and detected hydrogen ions being emitted from the gas, and concluded that they were produced by alpha particles hitting and splitting the nuclei of the nitrogen atoms. These observations led Rutherford to conclude that the hydrogen nucleus is a singular particle with a positive charge equal to the electron's negative charge. He named this particle "proton" in 1920. The number of protons in an atom (which Rutherford called the "atomic number") was found to be equal to the element's ordinal number on the periodic table and therefore provided a simple and clear-cut way of distinguishing the elements from each other. 
The atomic weight of each element is higher than its proton number, so Rutherford hypothesized that the surplus weight was carried by unknown particles with no electric charge and a mass equal to that of the proton. In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick identified these particles as Rutherford's hypothesized neutrons. The current consensus model In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, which describes electrons as three-dimensional waveforms rather than points in space. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be found. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Structure Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is the least massive of these particles by four orders of magnitude, at about 9.11 × 10−31 kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass of about 1.6726 × 10−27 kg. The number of protons in an atom is called its atomic number. 
Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a mass of . Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of —although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. 
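As a rough numeric illustration (a sketch, not part of the source text), the empirical scaling R ≈ r0·A^(1/3), with r0 taken here as roughly 1.07 fm, shows how weakly the nuclear radius grows with nucleon number A and how far below the atomic scale it remains:

# Hedged sketch: empirical nuclear-radius scaling R ~ r0 * A**(1/3).
# r0 ~ 1.07 fm is a commonly quoted empirical coefficient, assumed here for illustration.
r0_fm = 1.07
for name, A in [("hydrogen-1", 1), ("carbon-12", 12), ("uranium-238", 238)]:
    radius_fm = r0_fm * A ** (1 / 3)
    print(f"{name}: R ~ {radius_fm:.1f} fm")
# A typical atomic radius of ~1e5 fm is therefore four to five orders of magnitude larger.
print("atom/nucleus size ratio ~", round(1e5 / (r0_fm * 238 ** (1 / 3))))

Even the heaviest nuclei remain below about 10 fm in radius, which is the scale comparison made in the text above.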
The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high-energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon begins to decrease. That means that a fusion process producing a nucleus that has an atomic number higher than about 26, and a mass number higher than about 60, is an endothermic process. Thus, more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. 
Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. Properties Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 251 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 161 (bringing the total to 251) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 35 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides is known as the primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.1 stable isotopes per element. Twenty-six "monoisotopic elements" have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. 
Of the 251 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10, and nitrogen-14. (Tantalum-180m is odd-odd and observationally stable, but is predicted to decay with a very long half-life.) Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138, and lutetium-176. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Mass The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately . Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of . As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about ). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. 
Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond with a mass of contains about 10 sextillion (10²²) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Radioactive decay Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are: Alpha decay: this process occurs when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number. Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while the proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron. Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle; thus, gamma decay usually follows alpha or beta decay. Other, rarer types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission, which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. 
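As a minimal numeric sketch (not taken from the source), the surviving fraction of a sample follows N/N0 = (1/2)^(t/T½); a few lines of Python make the halving pattern explicit:

# Fraction of a radioactive sample remaining after time t, given its half-life.
def remaining_fraction(t, t_half):
    return 0.5 ** (t / t_half)

t_half_years = 5730.0  # illustrative value, roughly the half-life of carbon-14
for n_halflives in range(5):
    t = n_halflives * t_half_years
    print(f"after {n_halflives} half-lives ({t:.0f} y): {remaining_fraction(t, t_half_years):.4f} remaining")
# prints 1.0000, 0.5000, 0.2500, 0.1250, 0.0625 — half of what is left decays in each half-life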
This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Magnetic moment Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin  ħ, or "spin-". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Due to the nature of electrons to obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. Energy levels The potential energy of an electron in an atom is negative relative to when the distance from the nucleus goes to infinity; its dependence on the electron's position reaches the minimum inside the nucleus, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. stationary state, while an electron transition to a higher level results in an excited state. 
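For hydrogen, these bound-state energies take the well-known form E_n = −13.6 eV / n². The short sketch below (an illustration, not text from the source) lists the first few levels and the energy needed to unbind the electron from each:

# Hydrogen energy levels, E_n = -13.6 eV / n**2 (illustrative sketch).
E1_eV = -13.6  # ground-state energy of hydrogen
for n in range(1, 5):
    E_n = E1_eV / n ** 2
    print(f"n = {n}: E = {E_n:7.2f} eV   (energy to unbind: {abs(E_n):.2f} eV)")
# n = 1 is the ground state; higher n are excited states lying progressively closer to 0 eV.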
The electron's energy increases along with n because the (average) distance to the nucleus increases. Dependence of the energy on is caused not by the electrostatic potential of the nucleus, but by interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model; this difference can be precisely calculated using the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components, a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valence and bonding behavior Valency is the combining power of an element. 
It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one-electron more than a filled shell, and others that are one-electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. States Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. Identification While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves to and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then periodicities can be observed that correspond to individual atoms. The method alone is not chemically specific, and cannot identify the atomic species present at the surface. 
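The height sensitivity of the technique comes from the exponential distance dependence of the tunneling current noted above. As a hedged sketch (the decay constant used here is an assumed, typical value, not taken from the source), the current falls roughly as exp(−2κd):

# Illustrative only: I(d) ~ I0 * exp(-2 * kappa * d), with kappa ~ 1.1 per angstrom,
# a typical assumed decay constant for a metallic work function of a few eV.
from math import exp
kappa_per_angstrom = 1.1
for extra_gap in (0.0, 0.5, 1.0, 2.0):  # extra tip-sample separation in angstroms
    factor = exp(-2 * kappa_per_angstrom * extra_gap)
    print(f"+{extra_gap:.1f} A: current x {factor:.3f}")
# Roughly an order of magnitude drop per additional angstrom, which is why
# sub-angstrom height changes are readily detected.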
Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Origin and current state Baryonic matter forms about 4% of the total energy density of the observable universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—an immense pressure makes electron shells impossible. Formation Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes, Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. 
Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Earth Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. Rare and theoretical forms Superheavy elements All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. 
No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Exotic matter Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics. See also Notes References Bibliography Further reading External links Atoms in Motion – The Feynman Lectures on Physics Chemistry Articles containing video clips
Atom
[ "Physics" ]
10,285
[ "Atoms", "Matter" ]
1,200
https://en.wikipedia.org/wiki/Atomic%20physics
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Isolated atoms Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms. Electronic configuration Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons). Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved. If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon. 
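A minimal sketch of this energy bookkeeping (illustrative only; the 13.6 eV figure is the hydrogen ground-state binding energy, used here as an assumed example): any photon energy in excess of the binding energy appears as kinetic energy of the freed electron.

# Photoionization energy balance (hedged illustration): KE = E_photon - E_binding when positive.
def outcome(photon_eV, binding_eV=13.6):  # 13.6 eV: hydrogen ground-state binding energy
    if photon_eV < binding_eV:
        return f"{photon_eV} eV photon: below the binding energy, excitation only"
    return f"{photon_eV} eV photon: ionization, ejected electron KE = {photon_eV - binding_eV:.1f} eV"

for E in (10.2, 13.6, 21.2):
    print(outcome(E))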
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes. Bohr Model of the Atom The Bohr model, proposed by Niels Bohr in 1913, is a theory describing the structure of the hydrogen atom. It introduced the idea of quantized orbits for electrons, combining classical and quantum physics. Its key postulates are as follows. First, electrons revolve around the nucleus in fixed, circular paths called orbits or energy levels; these orbits are stable and do not radiate energy. Second, the angular momentum of an electron is quantized: L = m_e v r = nħ for n = 1, 2, 3, …, where m_e is the mass of the electron, v its velocity, r the radius of the orbit, ħ = h/2π the reduced Planck constant, and n the principal quantum number labelling the orbit. Third, each orbit has a specific energy; the total energy of an electron in the nth orbit is E_n = −13.6 eV / n², where 13.6 eV is the magnitude of the ground-state energy of the hydrogen atom. Fourth, electrons can transition between orbits by absorbing or emitting energy equal to the difference between the energy levels, ΔE = E_f − E_i = hν, where h is the Planck constant, ν the frequency of the emitted or absorbed radiation, and E_f and E_i the final and initial energy levels. History and developments One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. It forms a part of the texts written in the 6th century BC to the 2nd century BC, such as those of Democritus. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward. The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy. Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work. Beyond the well-known phenomena that can be described with ordinary quantum mechanics, chaotic processes can occur which require different descriptions. Significant atomic physicists See also Particle physics Isomeric shift Atomism Ionisation Quantum Mechanics Electron Correlation Quantum Chemistry Bound State Bibliography Sommerfeld, A. 
(1923) Atomic structure and spectral lines. (translated from German "Atombau und Spektrallinien" 1921), Dutton Publisher. Smirnov, B.E. (2003) Physics of Atoms and Ions, Springer. ISBN 0-387-95550-X. Szász, L. (1992) The Electronic Structure of Atoms, John Wiley & Sons. ISBN 0-471-54280-6. Bethe, H.A. & Salpeter E.E. (1957) Quantum Mechanics of One- and Two-Electron Atoms. Springer. Born, M. (1937) Atomic Physics. Blackie & Son Limited. Cox, P.A. (1996) Introduction to Quantum Theory and Atomic Spectra. Oxford University Press. ISBN 0-19-855916 References External links MIT-Harvard Center for Ultracold Atoms Stanford QFARM Initiative for Quantum Science & Engineering Joint Quantum Institute at University of Maryland and NIST Atomic Physics on the Internet JILA (Atomic Physics) ORNL Physics Division Atomic, molecular, and optical physics
Atomic physics
[ "Physics", "Chemistry" ]
1,662
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
1,206
https://en.wikipedia.org/wiki/Atomic%20orbital
In quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus. Each orbital in an atom is characterized by a set of values of three quantum numbers , , and , which respectively correspond to the electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of and orbitals, and are often labeled using associated harmonic polynomials (e.g., xy) which describe their angular structure. An orbital can be occupied by a maximum of two electrons, each with its own projection of spin . The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number and respectively. These names, together with their n values, are used to describe electron configurations of atoms. They are derived from descriptions by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between letters "i" and "j". Atomic orbitals are basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing submicroscopic behavior of electrons in matter. In this model, the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of quantum number , particularly when the atom bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s¹3d⁵ and Cr²⁺ = [Ar]3d⁴) can be rationalized only somewhat arbitrarily. Electron properties With the development of quantum mechanics and experimental findings (such as the two-slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have the following properties: Wave-like properties: Electrons do not orbit a nucleus in the manner of a planet orbiting a star, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency. The electrons are never in a single point location, though the probability of interacting with the electron at a single point can be found from the electron's wave function. The electron's charge acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function. Particle-like properties: The number of electrons orbiting a nucleus can be only an integer. Electrons jump between orbitals like particles. 
For example, if one photon strikes the electrons, only one electron changes state as a result. Electrons retain particle-like properties such as: each wave state has the same electric charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition. Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle. One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. So, for instance, an electron could be in a pure eigenstate (2, 1, 0), or a mixed state (2, 1, 0) + (2, 1, 1), or even the mixed state (2, 1, 0) + (2, 1, 1). For each eigenstate, a property has an eigenvalue. So, for the three states just mentioned, the value of is 2, and the value of is 1. For the second and third states, the value for is a superposition of 0 and 1. As a superposition of states, it is ambiguous—either exactly 0 or exactly 1—not an intermediate or average value like the fraction . A superposition of eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous and , but would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below). Formal quantum mechanical definition Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by an independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s² 2s² 2p⁶ for the ground state of neon; term symbol ¹S₀). This notation means that the corresponding Slater determinants have a clearly higher weight in the configuration interaction expansion. 
The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinant wave function at all. This is the case when electron correlation is large. Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. Types of orbital Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates in atoms and Cartesian in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors each dependent on a single coordinate: . The angular factors of atomic orbitals generate s, p, d, etc. functions as real combinations of spherical harmonics (where and are quantum numbers). There are typically three mathematical forms for the radial functions  which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons: The hydrogen-like orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on distance r from the nucleus has radial nodes and decays as . The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital. The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as . Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals. History The term orbital was introduced by Robert S. Mulliken in 1932 as short for one-electron orbital wave function. Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. 
These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics. Early models With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. Bohr atom In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. 
Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty. Modern conceptions and connections to the Heisenberg uncertainty principle Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum. 
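The energy levels and shell capacities discussed in this passage follow simple formulas: a one-electron atom has Eₙ ≈ −13.6 eV · Z²/n², and shell n holds 2n² electrons because each subshell ℓ holds 2(2ℓ + 1). The sketch below is a minimal illustration of both counts, using the approximate Rydberg energy as its only input.

```python
RYDBERG_EV = 13.605693  # approximate hydrogen ground-state binding energy in eV

def bohr_energy(n, Z=1):
    """Energy of level n for a one-electron atom of nuclear charge Z (in eV)."""
    return -RYDBERG_EV * Z**2 / n**2

def shell_capacity(n):
    """Electrons that fit in shell n: sum over l of 2*(2l + 1), which equals 2*n**2."""
    return sum(2 * (2 * l + 1) for l in range(n))

for n in range(1, 4):
    print(f"n={n}: E = {bohr_energy(n):7.2f} eV, capacity = {shell_capacity(n)}")
# n=1: E =  -13.61 eV, capacity = 2
# n=2: E =   -3.40 eV, capacity = 8
# n=3: E =   -1.51 eV, capacity = 18
```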
In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom. Orbital names Orbital notation and subshells Orbitals have been given names, which are usually given in the form: where X is the energy level corresponding to the principal quantum number ; type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number . For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level () and has an angular quantum number of , denoted as s. Orbitals with are denoted as p, d and f respectively. The set of orbitals for a given n and is called a subshell, denoted . The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and = 1. X-ray notation There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For , the letters associated with those numbers are K, L, M, N, O, ... respectively. Hydrogen-like orbitals The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron (He+, Li2+, etc.) is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions. (see hydrogen atom). For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: , , and . The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. 
The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model, where it determines the radius of each circular electron orbit. In modern quantum mechanics, however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell". Quantum numbers Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed. Complex orbitals In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells. The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n₀, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n₀ − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell. The magnetic quantum number, mℓ, describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. Within a subshell with a given ℓ, mℓ obtains the integer values in the range −ℓ ≤ mℓ ≤ ℓ. The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of mℓ available in that subshell. Empty cells represent subshells that do not exist. Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'. Each electron also has angular momentum in the form of quantum mechanical spin, given by the spin quantum number s = 1/2. Its projection along a specified axis is given by the spin magnetic quantum number, ms, which can be +1/2 or −1/2. These values are also called "spin up" or "spin down" respectively. The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. 
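The rules for n, ℓ and mℓ just listed can be restated as a short enumeration. The sketch below lists every orbital in the first few shells together with its subshell label; it encodes only the counting rules above, and the function and variable names are illustrative.

```python
SUBSHELL_LETTERS = "spdfghik"  # letters for l = 0, 1, 2, ... (j is skipped by convention)

def orbitals(n_max):
    """Yield (n, l, m_l) for all orbitals with principal quantum number <= n_max."""
    for n in range(1, n_max + 1):
        for l in range(0, n):              # 0 <= l <= n - 1
            for m_l in range(-l, l + 1):   # -l <= m_l <= +l
                yield n, l, m_l

for n, l, m_l in orbitals(3):
    label = f"{n}{SUBSHELL_LETTERS[l]}"
    print(f"{label}: n={n}, l={l}, m_l={m_l:+d}")
# Each orbital holds two electrons (m_s = +1/2 and -1/2), so the 2p subshell
# (three m_l values) holds 6 electrons, and shell n holds 2*n**2 in total.
```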
If there are two electrons in an orbital with given values for three quantum numbers, (, , ), these two electrons must differ in their spin projection ms. The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing from . As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experimentwhere an atom is exposed to a magnetic fieldprovides one such example. Real orbitals Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use real atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. Letting denote a complex orbital with quantum numbers , , and , the real orbitals may be defined by If , with the radial part of the orbital, this definition is equivalent to where is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic . Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, quantum numbers and have the same interpretation and significance as their complex counterparts, but is no longer a good quantum number (but its absolute value is). Some real orbitals are given specific names beyond the simple designation. Orbitals with quantum number are called orbitals. With this one can already assign names to complex orbitals such as ; the first symbol is the quantum number, the second character is the symbol for that particular quantum number and the subscript is the quantum number. As an example of how the full orbital names are generated for real orbitals, one may calculate . From the table of spherical harmonics, with . Then Likewise . As a more complicated example: In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in appearing in the numerator. We ignore any terms in the polynomial except for the term with the highest exponent in . We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the and quantum numbers. The expression above all use the Condon–Shortley phase convention which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the and orbitals may appear, for example, as the sum and difference of and , contrary to what is shown above. Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for so there does not seem be consensus on the naming of orbitals or higher according to this nomenclature. Shapes of orbitals Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. 
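The statement that real orbitals are simple linear combinations of the complex ones can be checked numerically for the px case using the explicit ℓ = 1 spherical harmonics with the Condon–Shortley phase. The sketch below is a minimal verification under that convention; it uses the textbook formulas directly rather than a library routine.

```python
import numpy as np

def Y1(m, theta, phi):
    """Complex l = 1 spherical harmonics with the Condon–Shortley phase."""
    if m == 1:
        return -np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == -1:
        return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    return np.sqrt(3 / (4 * np.pi)) * np.cos(theta)          # m == 0

def p_x(theta, phi):
    """Real p_x angular function built from the m = +1 and m = -1 harmonics."""
    return ((Y1(-1, theta, phi) - Y1(1, theta, phi)) / np.sqrt(2)).real

theta, phi = 0.7, 1.9          # arbitrary test angles
lhs = p_x(theta, phi)
rhs = np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)  # proportional to x/r
print(np.isclose(lhs, rhs))    # True: the combination reproduces sin(theta)*cos(phi)
```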
The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although as the square of an absolute value is everywhere non-negative, the sign of the wave function is often indicated in each subregion of the orbital picture. Sometimes the function is graphed to show its phases, rather than which shows probability density but has no phase (which is lost when taking absolute value, since is a complex number). orbital graphs tend to have less spherical, thinner lobes than graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly graphs. The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave and modes; the projection of the orbital onto the xy plane has a resonant wavelength around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each there are two standing wave solutions and . If , the orbital is vertical, counter rotating information is unknown, and the orbital is z-axis symmetric. If there are no counter rotating modes. There are only radial modes and the shape is spherically symmetric. Nodal planes and nodal spheres are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers and . An orbital with azimuthal quantum number has radial nodal planes passing through the origin. For example, the s orbitals () are spherically symmetric and have no nodal planes, whereas the p orbitals () have a single nodal plane between the lobes. The number of nodal spheres equals , consistent with the restriction on the quantum numbers. The principal quantum number controls the total number of nodal surfaces which is . Loosely speaking, is energy, is analogous to eccentricity, and is orientation. In general, determines size and energy of the orbital for a given nucleus; as increases, the size of the orbital increases. The higher nuclear charge of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases. Also in general terms, determines an orbital's shape, and its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on also. Together, the whole set of orbitals for a given and fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes. The single s orbitals () are shaped like spheres. For it is roughly a solid ball (densest at center and fades outward exponentially), but for , each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See illustration of a cross-section of these nested shells, at right. The s orbitals for all numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) 
have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction that is often termed as the impact parameter effect is included in the outcome (see the figure at right). The shapes of p, d and f orbitals are described verbally here and shown graphically in the Orbitals table below. The three p orbitals for have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of . The overall result is a lobe pointing along each direction of the primary axes. Four of the five d orbitals for look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the center along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair. There are seven f orbitals, each with shapes more complex than those of the d orbitals. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with values higher than the lowest possible value, exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of (for example, 3p orbitals vs. the fundamental 2p), an additional node in each lobe. Still higher values of further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics, in fact it is possible to generate sets where all the d's are the same shape, just like the and are the same shape. Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number of the same shell (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ) is spherical. This is known as Unsöld's theorem. Orbitals table This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium and some beyond. 
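The node-counting rules described earlier (ℓ angular nodal surfaces through the origin, n − ℓ − 1 nodal spheres, and n − 1 nodes in total) can be tabulated in a few lines. The helper below assumes only that standard textbook counting; names are illustrative.

```python
def node_counts(n, l):
    """Standard textbook node counts for a hydrogen-like orbital with quantum numbers (n, l)."""
    if not 0 <= l < n:
        raise ValueError("require 0 <= l <= n - 1")
    angular = l              # nodal planes/cones passing through the origin
    radial = n - l - 1       # nodal spheres
    return angular, radial, angular + radial   # total = n - 1

for n, letter, l in [(1, "s", 0), (2, "s", 0), (2, "p", 1), (3, "p", 1), (3, "d", 2)]:
    a, r, t = node_counts(n, l)
    print(f"{n}{letter}: {a} angular + {r} radial = {t} nodes")
# 1s: 0 angular + 0 radial = 0 nodes
# 2p: 1 angular + 0 radial = 1 node
# 3p: 1 angular + 1 radial = 2 nodes
```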
"ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The orbital is the same as the orbital, but the and are formed by taking linear combinations of the and orbitals (which is why they are listed under the label). Also, the and are not the same shape as the , since they are pure spherical harmonics. * No elements with 6f, 7d or 7f electrons have been discovered yet. † Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1. ‡ For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed. (Mt, Ds, Rg and Cn are still missing). These are the real-valued orbitals commonly used in chemistry. Only the orbitals where are eigenstates of the orbital angular momentum operator, . The columns with are combinations of two eigenstates. See comparison in the following picture: Qualitative understanding of shapes The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism). This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because particles were to collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. 
A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system and the wave functions for a vibrating sphere are three-coordinate . None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. Orbital energy In atoms with one electron (hydrogen-like atom), the energy of an orbital (and, consequently, any electron in the orbital) is determined mainly by . The orbital has the lowest possible energy in the atom. Each successively higher value of has a higher energy, but the difference decreases as increases. For high , the energy becomes so high that the electron can easily escape the atom. In single electron atoms, all levels with different within a given are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where energy depends on and another quantum number ), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on but also on . Higher values of are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When , the increase in energy of the orbital becomes so large as to push the energy of orbital above the energy of the s orbital in the next higher shell; when the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled. The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. 
Thus, in atoms with higher atomic number, the of electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers of electrons becomes less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with and given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below. Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known. Electron placement and the periodic table Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number . Thus, two electrons may occupy a single orbital, so long as they have different values of . Because takes one of only two values ( or ), at most two electrons can occupy each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom. The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same -state (but the associated with that -state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. 
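The filling order listed above is the Madelung (n + ℓ) rule: subshells fill in order of increasing n + ℓ, with ties broken by smaller n. The sketch below regenerates the quoted sequence from that rule alone; as the following paragraph notes, real elements deviate from it in a few places.

```python
LETTERS = "spdf"

def madelung_order(n_max=7):
    """Subshells (n, l) sorted by the Madelung rule: ascending (n + l), then ascending n."""
    subshells = [(n, l) for n in range(1, n_max + 1)
                        for l in range(0, n) if l < len(LETTERS)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{LETTERS[l]}" for n, l in subshells]

print(", ".join(madelung_order()[:19]))
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
# which reproduces the filling order quoted in the text.
```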
The result is a compressed periodic table, with each entry representing two successive elements: Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms (see ). The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. Relativistic effects For elements with high atomic number , the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high- atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium. In the Bohr model, an  electron has a velocity given by , where is the atomic number, is the fine-structure constant, and is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of  due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than . The critical  value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes. pp hybridization (conjectured) In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon. Transitions between orbitals Bound quantum states have discrete energy levels. 
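The velocity estimate behind the Z = 137 limit mentioned above is the Bohr-model relation v ≈ Zαc for a 1s electron, with α ≈ 1/137 the fine-structure constant. The following is a minimal numeric check of that estimate only; it says nothing about the Dirac-equation refinements discussed in the text.

```python
FINE_STRUCTURE = 1 / 137.035999  # fine-structure constant alpha (dimensionless)

def one_s_speed_fraction(Z):
    """Bohr-model estimate of v/c for a 1s electron around nuclear charge Z."""
    return Z * FINE_STRUCTURE

for name, Z in [("H", 1), ("Ag", 47), ("Au", 79), ("Hg", 80), ("Z=137", 137)]:
    print(f"{name:>6}: v/c ~ {one_s_speed_fraction(Z):.3f}")
# Gold's 1s electrons reach roughly 58% of the speed of light in this estimate,
# which is the origin of the relativistic orbital contraction described above.
```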
When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom: State 1: n = 1, ℓ = 0, and mℓ = 0; State 2: n = 2, ℓ = 0, and mℓ = 0. By quantum theory, state 1 has a fixed energy of E1, and state 2 has a fixed energy of E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom that have an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can jump only to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model. The atomic orbital model is nevertheless an approximation to the full quantum theory, which recognizes only many-electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron. See also Atomic electron configuration table Condensed matter physics Electron configuration Energy level Hund's rules Molecular orbital Orbital overlap Quantum chemistry Quantum chemistry computer programs Solid-state physics Wave function collapse Wiswesser's rule References External links 3D representation of hydrogenic orbitals The Orbitron, a visualization of all common and uncommon atomic orbitals, from 1s to 7g Grand table Still images of many orbitals Atomic physics Chemical bonding Electron states Quantum chemistry Articles containing video clips
Atomic orbital
[ "Physics", "Chemistry", "Materials_science" ]
10,497
[ "Electron", "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Condensed matter physics", "Atomic physics", " molecular", "Atomic", "nan", "Chemical bonding", "Electron states", " and optical physics" ]
1,207
https://en.wikipedia.org/wiki/Amino%20acid
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of life. Amino acids can be classified according to the locations of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.); other categories relate to polarity, ionization, and side-chain group type (aliphatic, acyclic, aromatic, polar, etc.). In the form of proteins, amino-acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in enabling life on Earth and its emergence. Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of the fictitious "neutral" structure shown in the illustration. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula . The Commission justified this approach as follows: The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules. History The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. The last of the 20 common amino acids to be discovered was threonine in 1935 by William Cumming Rose, who also determined the essential amino acids and established the minimum daily requirements of all amino acids for optimal growth. The unity of the chemical category was recognized by Wurtz in 1865, but he gave no particular name to it. The first use of the term "amino acid" in the English language dates from 1898, while the German term, , was used earlier. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are formed from many amino acids, whereby bonds are formed between the amino group of one amino acid with the carboxyl group of another, resulting in a linear structure that Fischer termed "peptide". General structure 2-, alpha-, or α-amino acids have the generic formula in most cases, where R is an organic substituent known as a "side chain". Of the many hundreds of described amino acids, 22 are proteinogenic ("protein-building"). It is these 22 compounds that combine to give a vast array of peptides and proteins assembled by ribosomes. Non-proteinogenic or modified amino acids may arise from post-translational modification or during nonribosomal peptide synthesis. Chirality The carbon atom next to the carboxyl group is called the α–carbon. In proteinogenic amino acids, it bears the amine and the R group or side chain specific to each amino acid, as well as a hydrogen atom. 
With the exception of glycine, for which the side chain is also a hydrogen atom, the α–carbon is stereogenic. All chiral proteogenic amino acids have the L configuration. They are "left-handed" enantiomers, which refers to the stereoisomers of the alpha carbon. A few D-amino acids ("right-handed") have been found in nature, e.g., in bacterial envelopes, as a neuromodulator (D-serine), and in some antibiotics. Rarely, D-amino acid residues are found in proteins, and are converted from the L-amino acid as a post-translational modification. Side chains Polar charged side chains Five amino acids possess a charge at neutral pH. Often these side chains appear at the surfaces on proteins to enable their solubility in water, and side chains with opposite charges form important electrostatic contacts called salt bridges that maintain structures within a single protein or between interfacing proteins. Many proteins bind metal into their structures specifically, and these interactions are commonly mediated by charged side chains such as aspartate, glutamate and histidine. Under certain conditions, each ion-forming group can be charged, forming double salts. The two negatively charged amino acids at neutral pH are aspartate (Asp, D) and glutamate (Glu, E). The anionic carboxylate groups behave as Brønsted bases in most circumstances. Enzymes in very low pH environments, like the aspartic protease pepsin in mammalian stomachs, may have catalytic aspartate or glutamate residues that act as Brønsted acids. There are three amino acids with side chains that are cations at neutral pH: arginine (Arg, R), lysine (Lys, K) and histidine (His, H). Arginine has a charged guanidino group and lysine a charged alkyl amino group, and are fully protonated at pH 7. Histidine's imidazole group has a pKa of 6.0, and is only around 10% protonated at neutral pH. Because histidine is easily found in its basic and conjugate acid forms it often participates in catalytic proton transfers in enzyme reactions. Polar uncharged side chains The polar, uncharged amino acids serine (Ser, S), threonine (Thr, T), asparagine (Asn, N) and glutamine (Gln, Q) readily form hydrogen bonds with water and other amino acids. They do not ionize in normal conditions, a prominent exception being the catalytic serine in serine proteases. This is an example of severe perturbation, and is not characteristic of serine residues in general. Threonine has two chiral centers, not only the L (2S) chiral center at the α-carbon shared by all amino acids apart from achiral glycine, but also (3R) at the β-carbon. The full stereochemical specification is (2S,3R)-L-threonine. Hydrophobic side chains Nonpolar amino acid interactions are the primary driving force behind the processes that fold proteins into their functional three dimensional structures. None of these amino acids' side chains ionize easily, and therefore do not have pKas, with the exception of tyrosine (Tyr, Y). The hydroxyl of tyrosine can deprotonate at high pH forming the negatively charged phenolate. Because of this one could place tyrosine into the polar, uncharged amino acid category, but its very low solubility in water matches the characteristics of hydrophobic amino acids well. Special case side chains Several side chains are not described well by the charged, polar and hydrophobic categories. Glycine (Gly, G) could be considered a polar amino acid since its small size means that its solubility is largely determined by the amino and carboxylate groups. 
However, the lack of any side chain provides glycine with a unique flexibility among amino acids with large ramifications to protein folding. Cysteine (Cys, C) can also form hydrogen bonds readily, which would place it in the polar amino acid category, though it can often be found in protein structures forming covalent bonds, called disulphide bonds, with other cysteines. These bonds influence the folding and stability of proteins, and are essential in the formation of antibodies. Proline (Pro, P) has an alkyl side chain and could be considered hydrophobic, but because the side chain joins back onto the alpha amino group it becomes particularly inflexible when incorporated into proteins. Similar to glycine this influences protein structure in a way unique among amino acids. Selenocysteine (Sec, U) is a rare amino acid not directly encoded by DNA, but is incorporated into proteins via the ribosome. Selenocysteine has a lower redox potential compared to the similar cysteine, and participates in several unique enzymatic reactions. Pyrrolysine (Pyl, O) is another amino acid not encoded in DNA, but synthesized into protein by ribosomes. It is found in archaeal species where it participates in the catalytic activity of several methyltransferases. β- and γ-amino acids Amino acids with the structure , such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Ones with the structure are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H). Zwitterions The common natural forms of amino acids have a zwitterionic structure, with ( in the case of proline) and functional groups attached to the same C atom, and are thus α-amino acids, and are the only ones found in proteins during translation in the ribosome. In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both and in charged states, so the overall structure is . At physiological pH the so-called "neutral forms" are not present to any measurable degree. Although the two charges in the zwitterion structure add up to zero it is misleading to call a species with a net charge of zero "uncharged". In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, . This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give . Although various definitions of acids and bases are used in chemistry, the only one that is useful for chemistry in aqueous solution is that of Brønsted: an acid is a species that can donate a proton to another species, and a base is one that can accept a proton. This criterion is used to label the groups in the above illustration. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins. Likewise, lysine, tyrosine and cysteine will typically act as a Brønsted acid. Histidine under these conditions can act both as a Brønsted acid and a base. Isoelectric point For amino acids with uncharged side-chains the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. 
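The pH dependence of these protonation states, including the statement earlier that histidine's imidazole (pKa ≈ 6.0) is only about 10% protonated at pH 7, follows from the Henderson–Hasselbalch equation. Below is a minimal sketch of that calculation; the lysine and aspartate side-chain pKa values are approximate textbook figures added for comparison, not values quoted in this article.

```python
def fraction_protonated(pH, pKa):
    """Fraction of an ionizable group in its protonated (conjugate acid) form."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for group, pKa in [("histidine imidazole", 6.0),
                   ("lysine side-chain amine", 10.5),   # approximate textbook value
                   ("aspartate side-chain carboxyl", 3.9)]:  # approximate textbook value
    print(f"{group}: {100 * fraction_protonated(7.0, pKa):.1f}% protonated at pH 7")
# histidine imidazole: 9.1% protonated at pH 7, matching the ~10% figure above;
# the lysine amine stays essentially fully protonated, the carboxyl essentially deprotonated.
```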
At the midpoint between the two pKa values, the trace amount of net negative and trace of net positive ions balance, so that the average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = ½(pKa1 + pKa2). For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate with negative side chains, the terminal amino group is essentially entirely in the charged form −NH3+, but this positive charge needs to be balanced by a state in which, on average, just one of the two carboxylate groups is negatively charged. This occurs halfway between the two carboxylate pKa values: pI = ½(pKa1 + pKa(R)), where pKa(R) is the side chain pKa. Similar considerations apply to other amino acids with ionizable side-chains, including not only glutamate (similar to aspartate), but also cysteine, tyrosine, histidine, lysine and arginine, the last three of which have positive side chains. Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point, and some amino acids (in particular, those with nonpolar side chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point. Physicochemical properties The 20 canonical amino acids can be classified according to their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties influence protein structure and protein–protein interactions. The water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. (In biochemistry, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.) The integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them in the lipid bilayer. Some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that sticks to the membrane. In a similar fashion, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged amino acids like lysine and arginine. For example, lysine and arginine are present in large amounts in the low-complexity regions of nucleic-acid binding proteins. There are various hydrophobicity scales of amino acid residues. Some amino acids have special properties. Cysteine can form covalent disulfide bonds to other cysteine residues. Proline forms a cycle to the polypeptide backbone, and glycine is more flexible than other amino acids. Glycine and proline are strongly present within low complexity regions of both eukaryotic and prokaryotic proteins, whereas the opposite is the case with cysteine, phenylalanine, tryptophan, methionine, valine, leucine and isoleucine, which are highly reactive, or complex, or hydrophobic. Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acid residue side chains, sometimes producing lipoproteins (that are hydrophobic) or glycoproteins (that are hydrophilic), allowing the protein to attach temporarily to a membrane. 
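The averaging rules for pI described at the start of this passage translate directly into code: with no ionizable side chain, pI is the mean of the two backbone pKa values; with an acidic side chain it is the mean of the two lowest pKa values, and with a basic side chain the mean of the two highest. The sketch below applies this to three amino acids using approximate textbook pKa values, which are illustrative assumptions rather than figures from this article.

```python
# Approximate pKa values (alpha-COOH, alpha-NH3+, side chain); illustrative only.
PKAS = {
    "glycine":   (2.34, 9.60, None),
    "aspartate": (1.88, 9.60, 3.65),   # acidic side chain
    "lysine":    (2.18, 8.95, 10.53),  # basic side chain
}

def isoelectric_point(name):
    a_cooh, a_nh3, side = PKAS[name]
    if side is None:                       # no ionizable side chain
        return 0.5 * (a_cooh + a_nh3)
    if side < a_nh3:                       # acidic side chain: average the two lowest pKas
        return 0.5 * (a_cooh + side)
    return 0.5 * (a_nh3 + side)            # basic side chain: average the two highest pKas

for name in PKAS:
    print(f"{name}: pI ~ {isoelectric_point(name):.2f}")
# glycine ~ 5.97, aspartate ~ 2.77, lysine ~ 9.74
```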
For example, a signaling protein can attach and then detach from a cell membrane, because it contains cysteine residues that can have the fatty acid palmitic acid added to them and subsequently removed. Table of standard amino acid abbreviations and properties Although one-letter symbols are included in the table, IUPAC–IUBMB recommend that "Use of the one-letter symbols should be restricted to the comparison of long sequences". The one-letter notation was chosen by IUPAC-IUB based on the following rules: Initial letters are used where there is no ambuiguity: C cysteine, H histidine, I isoleucine, M methionine, S serine, V valine, Where arbitrary assignment is needed, the structurally simpler amino acids are given precedence: A Alanine, G glycine, L leucine, P proline, T threonine, F PHenylalanine and R aRginine are assigned by being phonetically suggestive, W tryptophan is assigned based on the double ring being visually suggestive to the bulky letter W, K lysine and Y tyrosine are assigned as alphabetically nearest to their initials L and T (note that U was avoided for its similarity with V, while X was reserved for undetermined or atypical amino acids); for tyrosine the mnemonic tYrosine was also proposed, D aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid; E glutamate was assigned in alphabetical sequence being larger by merely one methylene –CH2– group, N asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe; Q glutamine was assigned in alphabetical sequence of those still available (note again that O was avoided due to similarity with D), with the proposed mnemonic Qlutamine. Two additional amino acids are in some species coded for by codons that are usually interpreted as stop codons: In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue. They are also used to summarize conserved protein sequence motifs. The use of single letters to indicate sets of similar residues is similar to the use of abbreviation codes for degenerate bases. Unk is sometimes used instead of Xaa, but is less standard. Ter or * (from termination) is used in notation for mutations in proteins when a stop codon occurs. It corresponds to no amino acid at all. In addition, many nonstandard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz–Phe–boroLeu, and MG132 is Z–Leu–Leu–Leu–al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet). Occurrence and functions in biochemistry Proteinogenic amino acids Amino acids are the precursors to proteins. They join by condensation reactions to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These chains are linear and unbranched, with each amino acid residue within the chain attached to two neighboring amino acids. In nature, the process of making proteins encoded by RNA genetic material is called translation and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme that is called a ribosome. 
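As a toy illustration of how codons are read during translation, and of how UGA can be recoded to selenocysteine, the sketch below translates an mRNA string with a deliberately tiny, partial codon table and reports the result in both one-letter and three-letter codes. Representing SECIS-dependent recoding as a simple flag is a simplification of the mechanism described in the text, and the table covers only a handful of codons.

```python
# Partial standard codon table (RNA codons) and a few one-/three-letter names.
CODONS = {"AUG": "M", "UUU": "F", "UGU": "C", "GGC": "G", "AAA": "K",
          "UGG": "W", "UAA": "*", "UAG": "*", "UGA": "*"}
ONE_TO_THREE = {"M": "Met", "F": "Phe", "C": "Cys", "G": "Gly", "K": "Lys",
                "W": "Trp", "U": "Sec", "X": "Xaa"}

def translate(mrna, secis_present=False):
    """Read an mRNA string codon by codon; '*' marks a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        aa = CODONS.get(codon, "X")            # X = codon not in this partial table
        if codon == "UGA" and secis_present:
            aa = "U"                           # recoded to selenocysteine (Sec)
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

mrna = "AUGUUUUGAGGCAAA"
for flag in (False, True):
    seq = translate(mrna, secis_present=flag)
    print(seq, "=", "-".join(ONE_TO_THREE[a] for a in seq))
# MF = Met-Phe                  (UGA read as a stop codon)
# MFUGK = Met-Phe-Sec-Gly-Lys   (UGA recoded when a SECIS element is assumed present)
```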
The order in which the amino acids are added is read through the genetic code from an mRNA template, which is an RNA derived from one of the organism's genes. Twenty-two amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 20 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms. Several independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of amino acids that constituted later additions of the genetic code. Standard vs nonstandard amino acids The 20 amino acids that are encoded directly by the codons of the universal genetic code are called standard or canonical amino acids. A modified form of methionine (N-formylmethionine) is often incorporated in place of methionine as the initial amino acid of proteins in bacteria, mitochondria and plastids (including chloroplasts). Other amino acids are called nonstandard or non-canonical. Most of the nonstandard amino acids are also non-proteinogenic (i.e. they cannot be incorporated into proteins during translation), but two of them are proteinogenic, as they can be incorporated translationally into proteins by exploiting information not encoded in the universal genetic code. The two nonstandard proteinogenic amino acids are selenocysteine (present in many non-eukaryotes as well as most eukaryotes, but not coded directly by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The incorporation of these nonstandard amino acids is rare. For example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons. For example, selenocysteine is encoded by stop codon and SECIS element. N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts) is generally considered as a form of methionine rather than as a separate proteinogenic amino acid. Codon–tRNA combinations not found in nature can also be used to "expand" the genetic code and form novel proteins known as alloproteins incorporating non-proteinogenic amino acids. Non-proteinogenic amino acids Aside from the 22 proteinogenic amino acids, many non-proteinogenic amino acids are known. Those either are not found in proteins (for example carnitine, GABA, levothyroxine) or are not produced directly and in isolation by standard cellular machinery. For example, hydroxyproline, is synthesised from proline. Another example is selenomethionine). Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. 
Examples: the carboxylation of glutamate allows for better binding of calcium cations, Hydroxyproline, generated by hydroxylation of proline, is a major component of the connective tissue collagen. Hypusine, in the translation initiation factor EIF5A, contains a modification of lysine. Some non-proteinogenic amino acids are not found in proteins. Examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid beta alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A. In mammalian nutrition Amino acids are not a typical component of food: animals eat proteins. The protein is broken down into amino acids in the process of digestion. They are then used to synthesize new proteins, other biomolecules, or are oxidized to urea and carbon dioxide as a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose, through gluconeogenesis. Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food. Semi-essential and conditionally essential amino acids, and juvenile requirements In addition, cysteine, tyrosine, and arginine are considered semiessential amino acids, and taurine a semi-essential aminosulfonic acid in children. Some amino acids are conditionally essential for certain ages or medical conditions. Essential amino acids may also vary from species to species. In juveniles, the metabolic pathways that synthesize these monomers are not fully developed. Non-protein functions Many proteinogenic and non-proteinogenic amino acids have biological functions beyond being precursors to proteins and peptides. In humans, amino acids also have important roles in diverse biosynthetic pathways. Defenses against herbivores in plants sometimes employ amino acids. Examples: Standard amino acids Tryptophan is a precursor of the neurotransmitter serotonin. Tyrosine (and its precursor phenylalanine) are precursors of the catecholamine neurotransmitters dopamine, epinephrine and norepinephrine and various trace amines. Phenylalanine is a precursor of phenethylamine and tyrosine in humans. In plants, it is a precursor of various phenylpropanoids, which are important in plant metabolism. Glycine is a precursor of porphyrins such as heme. Arginine is a precursor of nitric oxide. Ornithine and S-adenosylmethionine are precursors of polyamines. Aspartate, glycine, and glutamine are precursors of nucleotides. Roles for nonstandard amino acids Carnitine is used in lipid transport. Gamma-aminobutyric acid is a neurotransmitter. 5-HTP (5-hydroxytryptophan) is used for experimental treatment of depression. L-DOPA (L-dihydroxyphenylalanine) is used for Parkinson's treatment. Eflornithine inhibits ornithine decarboxylase and is used in the treatment of sleeping sickness.
Canavanine, an analogue of arginine found in many legumes is an antifeedant, protecting the plant from predators. Mimosine found in some legumes, is another possible antifeedant. This compound is an analogue of tyrosine and can poison animals that graze on these plants. However, not all of the functions of other abundant nonstandard amino acids are known. Uses in industry Animal feed Amino acids are sometimes added to animal feed because some of the components of these feeds, such as soybeans, have low levels of some of the essential amino acids, especially of lysine, methionine, threonine, and tryptophan. Likewise amino acids are used to chelate metal cations in order to improve the absorption of minerals from feed supplements. Food The food industry is a major consumer of amino acids, especially glutamic acid, which is used as a flavor enhancer, and aspartame (aspartylphenylalanine 1-methyl ester), which is used as an artificial sweetener. Amino acids are sometimes added to food by manufacturers to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation. Chemical building blocks Amino acids are low-cost feedstocks used in chiral pool synthesis as enantiomerically pure building blocks. Amino acids are used in the synthesis of some cosmetics. Aspirational uses Fertilizer The chelating ability of amino acids is sometimes used in fertilizers to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants. Biodegradable plastics Amino acids have been considered as components of biodegradable polymers, which have applications as environmentally friendly packaging and in medicine in drug delivery and the construction of prosthetic implants. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable antiscaling agent and a corrosion inhibitor. Synthesis Chemical synthesis The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine for example. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase. Biosynthesis In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. For other amino acids, plants use transaminases to move the amino group from glutamate to another alpha-keto acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too. Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosylmethionine, while hydroxyproline is made by a post translational modification of proline. 
Microorganisms and plants synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptidic lantibiotics such as alamethicin. However, in plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is an intermediate in the production of the plant hormone ethylene. Primordial synthesis The formation of amino acids and peptides is assumed to have preceded and perhaps induced the emergence of life on earth. Amino acids can form from simple precursors under various conditions. Surface-based chemical metabolism of amino acids and very small compounds may have led to the build-up of amino acids, coenzymes and phosphate-based small carbon molecules. Amino acids and similar building blocks could have been elaborated into proto-peptides, with peptides being considered key players in the origin of life. In the famous Urey-Miller experiment, the passage of an electric arc through a mixture of methane, hydrogen, and ammonia produces a large number of amino acids. Since then, scientists have discovered a range of ways and components by which the potentially prebiotic formation and chemical evolution of peptides may have occurred, such as condensing agents, the design of self-replicating peptides and a number of non-enzymatic mechanisms by which amino acids could have emerged and elaborated into peptides. Several hypotheses invoke the Strecker synthesis whereby hydrogen cyanide, simple aldehydes, ammonia, and water produce amino acids. According to a review, amino acids, and even peptides, "turn up fairly regularly in the various experimental broths that have been allowed to be cooked from simple chemicals. This is because nucleotides are far more difficult to synthesize chemically than amino acids." For a chronological order, it suggests that there must have been a 'protein world' or at least a 'polypeptide world', possibly later followed by the 'RNA world' and the 'DNA world'. Codon–amino acids mappings may be the biological information system at the primordial origin of life on Earth. While amino acids and consequently simple peptides must have formed under different experimentally probed geochemical scenarios, the transition from an abiotic world to the first life forms is to a large extent still unresolved. Reactions Amino acids undergo the reactions expected of the constituent functional groups. Peptide bond formation As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus. However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. 
For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamate through a peptide bond formed between the side chain carboxyl of the glutamate (the gamma carbon of this side chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione. In chemistry, peptides are synthesized by a variety of reactions. One of the most-used in solid-phase peptide synthesis uses the aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. Libraries of peptides are used in drug discovery through high-throughput screening. The combination of functional groups allow amino acids to be effective polydentate ligands for metal–amino acid chelates. The multiple side chains of amino acids can also undergo chemical reactions. Catabolism Degradation of an amino acid often involves deamination by moving its amino group to α-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle, as detailed in image at right. Complexation Amino acids are bidentate ligands, forming transition metal amino acid complexes. Chemical analysis The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available. See also Amino acid dating Beta-peptide Degron Erepsin Homochirality Hyperaminoacidemia Leucines Miller–Urey experiment Nucleic acid sequence RNA codon table Notes References Further reading External links Nitrogen cycle Zwitterions
Amino acid
[ "Physics", "Chemistry" ]
7,760
[ "Biomolecules by chemical classification", "Matter", "Amino acids", "Nitrogen cycle", "Zwitterions", "Metabolism", "Ions" ]
1,260
https://en.wikipedia.org/wiki/Advanced%20Encryption%20Standard
The Advanced Encryption Standard (AES), also known by its original name Rijndael, is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001. AES is a variant of the Rijndael block cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits. AES has been adopted by the U.S. government. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data. In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable. AES is included in the ISO/IEC 18033-3 standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by U.S. Secretary of Commerce Donald Evans. AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the U.S. National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module. Definitive standards The Advanced Encryption Standard (AES) is defined in each of: FIPS PUB 197: Advanced Encryption Standard (AES) ISO/IEC 18033-3: Block ciphers Description of the ciphers AES is based on a design principle known as a substitution–permutation network, and is efficient in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael, with a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, Rijndael per se is specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. Most AES calculations are done in a particular finite field. AES operates on a 4 × 4 column-major order array of 16 bytes termed the state. The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds is as follows: 10 rounds for 128-bit keys. 12 rounds for 192-bit keys. 14 rounds for 256-bit keys. Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds are applied to transform ciphertext back into the original plaintext using the same encryption key. High-level description of the algorithm KeyExpansion – round keys are derived from the cipher key using the AES key schedule. AES requires a separate 128-bit round key block for each round plus one more. Initial round key addition: AddRoundKey – each byte of the state is combined with a byte of the round key using bitwise xor. 9, 11 or 13 rounds: SubBytes – a non-linear substitution step where each byte is replaced with another according to a lookup table. ShiftRows – a transposition step where the last three rows of the state are shifted cyclically a certain number of steps.
MixColumns – a linear mixing operation which operates on the columns of the state, combining the four bytes in each column. AddRoundKey. Final round (making 10, 12 or 14 rounds in total): SubBytes, ShiftRows, and AddRoundKey (without MixColumns). The SubBytes step In the SubBytes step, each byte in the state array is replaced with a substitute byte using an 8-bit substitution box. Before round 0, the state array is simply the plaintext/input. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., S(a) ≠ a, and also any opposite fixed points, i.e., S(a) ⊕ a ≠ 0xFF. While performing the decryption, the InvSubBytes step (the inverse of SubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse. The ShiftRows step The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers. The MixColumns step In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher. During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state). Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of a polynomial of order x^7. Addition is simply XOR. Multiplication is modulo the irreducible polynomial x^8 + x^4 + x^3 + x + 1. If processed bit by bit, then, after shifting, a conditional XOR with 0x1B should be performed if the shifted value is larger than 0xFF (overflow must be corrected by subtraction of the generating polynomial). These are special cases of the usual multiplication in GF(2^8). In a more general sense, each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 3x^3 + x^2 + x + 2. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from GF(2)[x]. The MixColumns step can also be viewed as a multiplication by a particular MDS matrix in the finite field GF(2^8). This process is described further in the article Rijndael MixColumns. The AddRoundKey step In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR. Optimization of the cipher On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining the SubBytes and ShiftRows steps with the MixColumns step by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes).
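The byte-level arithmetic described above for MixColumns (XOR for addition, and a conditional XOR with 0x1B after each shift to reduce modulo x^8 + x^4 + x^3 + x + 1) can be made concrete with a short sketch. This is an illustrative Python version only, not an optimized or constant-time implementation; the table-lookup approach that the optimization discussion continues with below trades this per-byte work for precomputed tables.

# Sketch of the GF(2^8) byte multiplication used by MixColumns.
def xtime(b: int) -> int:
    """Multiply a byte by x (i.e., by 0x02) in GF(2^8)."""
    b <<= 1
    if b & 0x100:          # overflow past 8 bits ...
        b ^= 0x11B         # ... reduce by the AES polynomial (0x1B plus the dropped bit)
    return b & 0xFF

def gmul(a: int, b: int) -> int:
    """General multiplication in GF(2^8) by repeated xtime and XOR."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result

def mix_column(col):
    """Apply the fixed MixColumns polynomial 3x^3 + x^2 + x + 2 to one column."""
    a0, a1, a2, a3 = col
    return [
        gmul(a0, 2) ^ gmul(a1, 3) ^ a2 ^ a3,
        a0 ^ gmul(a1, 2) ^ gmul(a2, 3) ^ a3,
        a0 ^ a1 ^ gmul(a2, 2) ^ gmul(a3, 3),
        gmul(a0, 3) ^ a1 ^ a2 ^ gmul(a3, 2),
    ]

print([hex(x) for x in mix_column([0xDB, 0x13, 0x53, 0x45])])  # a commonly cited test column

The printed column can be compared against published MixColumns test vectors to confirm the arithmetic.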
A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step. Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations. Using a byte-oriented approach, it is possible to combine the SubBytes, ShiftRows, and MixColumns steps into a single round operation. Security The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information: The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use. AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys. Known attacks For cryptographers, a cryptographic "break" is anything faster than a brute-force attack (i.e., performing one trial decryption for each possible key in sequence). A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bit RC5 key by distributed.net in 2006. The key space increases by a factor of 2 for each additional bit of key length, and, if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time with every additional bit of key length. This implies that the effort of a brute-force search increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have been found to be vulnerable. AES has a fairly simple algebraic framework. In 2002, a theoretical attack, named the "XSL attack", was announced by Nicolas Courtois and Josef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components. Since then, other papers have shown that the attack, as originally presented, is unworkable; see XSL attack on block ciphers. During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications." In October 2000, however, at the end of the AES selection process, Bruce Schneier, a developer of the competing algorithm Twofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic." Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. In 2009, a new related-key attack was discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119.
In December 2009 it was improved to 2^99.5. This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys. However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially by constraining an attacker's means of selecting keys for relatedness. Another attack was blogged by Bruce Schneier on July 30, 2009, and released as a preprint on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES. The practicality of these attacks with stronger related keys has been criticized, for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010. In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint. This known-key distinguishing attack is an improvement of the rebound, or the start-from-the-middle attack, against AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48, and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128. The first key-recovery attacks on full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192, and 2^254.3 for AES-256 by Biaoshuai Tao and Hongjun Wu in a 2015 paper, which are the current best results in key recovery attack against AES. This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which was more than all the data stored on all the computers on the planet in 2016. A paper in 2015 later improved the space complexity to 2^56 bits, which is 9007 terabytes (while still keeping a time complexity of approximately 2^126). According to the Snowden documents, the NSA is doing research on whether a cryptographic attack based on tau statistic may help to break AES. At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented. Side-channel attacks Side-channel attacks do not attack the cipher as a black box, and thus are not related to cipher security as defined in the classical context, but are important in practice.
They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES. In April 2005, D. J. Bernstein announced a cache-timing attack that he used to break a custom server that used OpenSSL's AES encryption. The attack required over 200 million chosen plaintexts. The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples." In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache-timing attacks against the implementations of AES found in OpenSSL and Linux's dm-crypt partition encryption function. One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES. In December 2009 an attack on some hardware implementations was published that used differential fault analysis and allows recovery of a key with a complexity of 2^32. In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL. Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account. In March 2016, C. Ashokkumar, Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, which is a substantial improvement over previous works that require between 100 and a million encryptions. The proposed attack requires standard user privilege, and the key-retrieval algorithms run in under a minute. Many modern CPUs have built-in hardware instructions for AES, which protect against timing-related side-channel attacks. Quantum attacks AES-256 is considered to be quantum resistant, as it has similar quantum resistance to AES-128's resistance against traditional, non-quantum, attacks at 128 bits of security. AES-192 and AES-128 are not considered quantum resistant due to their smaller key sizes. AES-192 has a strength of 96 bits against quantum attacks and AES-128 has 64 bits of strength against quantum attacks, making them both insecure. NIST/CSEC validation The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of cryptographic modules validated to NIST FIPS 140-2 is required by the United States Government for encryption of all data that has a classification of Sensitive but Unclassified (SBU) or above.
From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: "Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2." The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments. Although NIST publication 197 ("FIPS 197") is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as Triple DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules. The Cryptographic Algorithm Validation Program (CAVP) allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page. This testing is a pre-requisite for the FIPS 140-2 module validation. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data. FIPS 140-2 validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change. Test vectors Test vectors are a set of known ciphers for a given input and key. NIST distributes the reference of AES test vectors as AES Known Answer Test (KAT) Vectors. Performance High speed and low RAM requirements were some of the criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bit smart cards to high-performance computers. On a Pentium Pro, AES encryption requires 18 clock cycles per byte (cpb), equivalent to a throughput of about 11 MiB/s for a 200 MHz processor. On Intel Core and AMD Ryzen CPUs supporting AES-NI instruction set extensions, throughput can be multiple GiB/s. On an Intel Westmere CPU, AES encryption using AES-NI takes about 1.3 cpb for AES-128 and 1.8 cpb for AES-256. Implementations See also AES modes of operation Disk encryption Whirlpool – hash function created by Vincent Rijmen and Paulo S. L. M. 
Barreto List of free and open-source software packages Notes References alternate link (companion web site contains online lectures on AES) External links AES algorithm archive information – (old, unmaintained) Animation of Rijndael – AES deeply explained and animated using Flash (by Enrique Zabala / University ORT / Montevideo / Uruguay). This animation (in English, Spanish, and German) is also part of CrypTool 1 (menu Indiv. Procedures → Visualization of Algorithms → AES). HTML5 Animation of Rijndael – Same Animation as above made in HTML5. Advanced Encryption Standard Cryptography
Advanced Encryption Standard
[ "Mathematics", "Engineering" ]
4,686
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
1,313
https://en.wikipedia.org/wiki/Aromatic%20compound
Aromatic compounds or arenes are organic compounds "with a chemistry typified by benzene" and "cyclically conjugated." The word "aromatic" originates from the past grouping of molecules based on odor, before their general chemical properties were understood. The current definition of aromatic compounds does not have any relation to their odor. Aromatic compounds are now defined as cyclic compounds satisfying Hückel's Rule. Aromatic compounds have the following general properties: Typically unreactive Often non-polar and hydrophobic High carbon-hydrogen ratio Burn with a strong sooty yellow flame, due to high C:H ratio Undergo electrophilic substitution reactions and nucleophilic aromatic substitutions Arenes are typically split into two categories: benzoids, which contain a benzene derivative and follow the benzene ring model, and non-benzoids, which contain other aromatic cyclic derivatives. Aromatic compounds are commonly used in organic synthesis and are involved in many reaction types, following both additions and removals, as well as saturation and dearomatization. Heteroarenes Heteroarenes are aromatic compounds, where at least one methine or vinylene (-C= or -CH=CH-) group is replaced by a heteroatom: oxygen, nitrogen, or sulfur. Examples of non-benzene compounds with aromatic properties are furan, a heterocyclic compound with a five-membered ring that includes a single oxygen atom, and pyridine, a heterocyclic compound with a six-membered ring containing one nitrogen atom. Hydrocarbons without an aromatic ring are called aliphatic. Approximately half of the compounds known in 2000 are described as aromatic to some extent. Applications Aromatic compounds are pervasive in nature and industry. Key industrial aromatic hydrocarbons are benzene, toluene, and xylene, together called BTX. Many biomolecules have phenyl groups including the so-called aromatic amino acids. Benzene ring model Benzene, C6H6, is the least complex aromatic hydrocarbon, and it was the first one defined as such. Its bonding nature was first recognized independently by Joseph Loschmidt and August Kekulé in the 19th century. Each carbon atom in the hexagonal cycle has four electrons to share. One electron forms a sigma bond with the hydrogen atom, and one is used in covalently bonding to each of the two neighboring carbons. This leaves six electrons, shared equally around the ring in delocalized pi molecular orbitals the size of the ring itself. This represents the equivalent nature of the six carbon-carbon bonds, all of bond order 1.5. This equivalency can also be explained by resonance forms. The electrons are visualized as floating above and below the ring, with the electromagnetic fields they generate acting to keep the ring flat. The circle symbol for aromaticity was introduced by Sir Robert Robinson and his student James Armit in 1925 and popularized starting in 1959 by the Morrison & Boyd textbook on organic chemistry. The proper use of the symbol is debated: some publications use it for any cyclic π system, while others use it only for those π systems that obey Hückel's rule. Some argue that, in order to stay in line with Robinson's originally intended proposal, the use of the circle symbol should be limited to monocyclic 6 π-electron systems. In this way the circle symbol for a six-center six-electron bond can be compared to the Y symbol for a three-center two-electron bond. Benzene and derivatives of benzene Benzene derivatives have from one to six substituents attached to the central benzene core.
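The aromaticity criterion invoked above, Hückel's rule, reduces to a simple arithmetic check on the number of π electrons in the conjugated ring (4n + 2 for some non-negative integer n). The short Python sketch below applies it to a few standard textbook examples; the electron counts are supplied only for illustration and are not drawn from this article.

# Minimal sketch of the Hückel criterion: a planar, fully conjugated ring
# is aromatic when it holds 4n + 2 pi electrons for some integer n >= 0.
def satisfies_huckel(pi_electrons: int) -> bool:
    """Return True if the pi-electron count equals 4n + 2 for some n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

examples = {
    "benzene": 6,                # aromatic (n = 1)
    "cyclopropenium cation": 2,  # aromatic (n = 0)
    "cyclobutadiene": 4,         # fails the rule (antiaromatic)
    "[18]annulene": 18,          # aromatic (n = 4)
}

for name, count in examples.items():
    print(f"{name}: {count} pi electrons -> Hückel satisfied: {satisfies_huckel(count)}")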
Examples of benzene compounds with just one substituent are phenol, which carries a hydroxyl group, and toluene with a methyl group. When there is more than one substituent present on the ring, their spatial relationship becomes important, for which the arene substitution patterns ortho, meta, and para are devised. When reacting to form more complex benzene derivatives, the substituents on a benzene ring can be described as either activating or deactivating, which are electron donating and electron withdrawing respectively. Activators are known as ortho-para directors, and deactivators are known as meta directors. Upon reacting, substituents will be added at the ortho, para or meta positions, depending on the directivity of the current substituents to make more complex benzene derivatives, often with several isomers. Electron flow leading to re-aromatization is key in ensuring the stability of such products. For example, three isomers exist for cresol because the methyl group and the hydroxyl group (both ortho para directors) can be placed next to each other (ortho), one position removed from each other (meta), or two positions removed from each other (para). Given that both the methyl and hydroxyl group are ortho-para directors, the ortho and para isomers are typically favoured. Xylenol has two methyl groups in addition to the hydroxyl group, and, for this structure, 6 isomers exist. Arene rings can stabilize charges, as seen in, for example, phenol (C6H5–OH), which is acidic at the hydroxyl (OH), as charge on the oxygen (alkoxide –O−) is partially delocalized into the benzene ring. Non-benzylic arenes Although benzylic arenes are common, non-benzylic compounds are also exceedingly important. Any compound containing a cyclic portion that conforms to Hückel's rule and is not a benzene derivative can be considered a non-benzylic aromatic compound. Monocyclic arenes Of annulenes larger than benzene, [12]annulene and [14]annulene are weakly aromatic compounds and [18]annulene, cyclooctadecanonaene, is aromatic, though strain within the structure causes a slight deviation from the precisely planar structure necessary for aromatic categorization. Another example of a non-benzylic monocyclic arene is the cyclopropenyl (cyclopropenium cation), which satisfies Hückel's rule with an n equal to 0. Note that only the cationic form of this cyclic propenyl is aromatic, given that neutrality in this compound would violate either the octet rule or Hückel's rule. Other non-benzylic monocyclic arenes include the aforementioned heteroarenes that can replace carbon atoms with other heteroatoms such as N, O or S. Common examples of these are the five-membered pyrrole and six-membered pyridine, both of which have a nitrogen substituted into the ring. Polycyclic aromatic hydrocarbons Polycyclic aromatic hydrocarbons, also known as polynuclear aromatic compounds (PAHs), are aromatic hydrocarbons that consist of fused aromatic rings and do not contain heteroatoms or carry substituents. Naphthalene is the simplest example of a PAH. PAHs occur in oil, coal, and tar deposits, and are produced as byproducts of fuel burning (whether fossil fuel or biomass). As pollutants, they are of concern because some compounds have been identified as carcinogenic, mutagenic, and teratogenic. PAHs are also found in cooked foods. Studies have shown that high levels of PAHs are found, for example, in meat cooked at high temperatures such as grilling or barbecuing, and in smoked fish.
They are also a good candidate molecule to act as a basis for the earliest forms of life. In graphene the PAH motif is extended to large 2D sheets. Reactions Aromatic ring systems participate in many organic reactions. Substitution In aromatic substitution, one substituent on the arene ring, usually hydrogen, is replaced by another reagent. The two main types are electrophilic aromatic substitution, when the active reagent is an electrophile, and nucleophilic aromatic substitution, when the reagent is a nucleophile. In radical-nucleophilic aromatic substitution, the active reagent is a radical. An example of electrophilic aromatic substitution is the nitration of salicylic acid, where a nitro group is added para to the hydroxyl substituent. Nucleophilic aromatic substitution involves displacement of a leaving group, such as a halide, on an aromatic ring. Aromatic rings are usually nucleophilic, but in the presence of electron-withdrawing groups aromatic compounds undergo nucleophilic substitution. Mechanistically, this reaction differs from a common SN2 reaction, because it occurs at a trigonal carbon atom (sp2 hybridization). Hydrogenation Hydrogenation of arenes creates saturated rings. The compound 1-naphthol is completely reduced to a mixture of decalin-ol isomers. The compound resorcinol, hydrogenated with Raney nickel in the presence of aqueous sodium hydroxide, forms an enolate which is alkylated with methyl iodide to 2-methyl-1,3-cyclohexanedione. Dearomatization In dearomatization reactions the aromaticity of the reactant is lost. In this regard, the dearomatization is related to hydrogenation. A classic approach is Birch reduction. The methodology is used in synthesis. See also Aromatic substituents: Aryl, Aryloxy and Arenediyl Asphaltene Hydrodealkylation Simple aromatic rings Rhodium-platinum oxide, a catalyst used to hydrogenate aromatic compounds. References External links
Aromatic compound
[ "Chemistry" ]
2,045
[ "Organic compounds", "Aromatic compounds" ]
1,800
https://en.wikipedia.org/wiki/Adenosine%20triphosphate
Adenosine triphosphate (ATP) is a nucleoside triphosphate that provides energy to drive and support many processes in living cells, such as muscle contraction, nerve impulse propagation, and chemical synthesis. Found in all known forms of life, it is often referred to as the "molecular unit of currency" for intracellular energy transfer. When consumed in a metabolic process, ATP converts either to adenosine diphosphate (ADP) or to adenosine monophosphate (AMP). Other processes regenerate ATP. It is also a precursor to DNA and RNA, and is used as a coenzyme. An average adult human processes around 50 kilograms of ATP (about 100 moles) daily. From the perspective of biochemistry, ATP is classified as a nucleoside triphosphate, which indicates that it consists of three components: a nitrogenous base (adenine), the sugar ribose, and the triphosphate. Structure ATP consists of an adenine attached by the 9-nitrogen atom to the 1′ carbon atom of a sugar (ribose), which in turn is attached at the 5' carbon atom of the sugar to a triphosphate group. In its many reactions related to metabolism, the adenine and sugar groups remain unchanged, but the triphosphate is converted to di- and monophosphate, giving respectively the derivatives ADP and AMP. The three phosphoryl groups are labeled as alpha (α), beta (β), and, for the terminal phosphate, gamma (γ). In neutral solution, ionized ATP exists mostly as ATP4−, with a small proportion of ATP3−. Metal cation binding Polyanionic and featuring a potentially chelating polyphosphate group, ATP binds metal cations with high affinity, and its binding constant for Mg2+ is particularly high. The binding of a divalent cation, almost always magnesium, strongly affects the interaction of ATP with various proteins. Due to the strength of the ATP-Mg2+ interaction, ATP exists in the cell mostly as a complex with Mg2+ bonded to the phosphate oxygen centers. A second magnesium ion is critical for ATP binding in the kinase domain. The presence of Mg2+ regulates kinase activity. It is interesting from an RNA world perspective that ATP can carry a Mg2+ ion, which catalyzes RNA polymerization. Chemical properties Salts of ATP can be isolated as colorless solids. ATP is stable in aqueous solutions between pH 6.8 and 7.4 (in the absence of catalysts). At more extreme pH levels, it rapidly hydrolyses to ADP and phosphate. Living cells maintain the ratio of ATP to ADP at a point ten orders of magnitude from equilibrium, with ATP concentrations fivefold higher than the concentration of ADP. In the context of biochemical reactions, the P-O-P bonds are frequently referred to as high-energy bonds. Reactive aspects The hydrolysis of ATP into ADP and inorganic phosphate, ATP(aq) + H2O(l) = ADP(aq) + HPO4^2−(aq) + H+(aq), releases enthalpy. This may differ under physiological conditions if the reactant and products are not exactly in these ionization states. The values of the free energy released by cleaving either a phosphate (Pi) or a pyrophosphate (PPi) unit from ATP at standard state concentrations of 1 mol/L at pH 7 are: ATP + H2O → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol) ATP + H2O → AMP + PPi ΔG°' = −45.6 kJ/mol (−10.9 kcal/mol) These abbreviated equations at a pH near 7 can be written more explicitly (R = adenosyl): [RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-P(O)2-O-PO3]3− + [HPO4]2− + H+ [RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-PO3]2− + [HO3P-O-PO3]3− + H+ At cytoplasmic conditions, where the ADP/ATP ratio is 10 orders of magnitude from equilibrium, the ΔG is around −57 kJ/mol.
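The shift from the standard-state value of about −30.5 kJ/mol to roughly −57 kJ/mol under cytoplasmic conditions follows from the concentration correction ΔG = ΔG°' + RT ln([ADP][Pi]/[ATP]). The Python sketch below applies this relation with assumed, order-of-magnitude concentrations chosen only to illustrate the effect; they are not values taken from this article.

import math

# Sketch: correct the standard free energy of ATP hydrolysis for
# (assumed, illustrative) cytoplasmic concentrations using
# deltaG = deltaG_standard + R*T*ln([ADP][Pi]/[ATP]).
R = 8.314e-3         # kJ/(mol*K)
T = 310.0            # K, roughly body temperature
dG_standard = -30.5  # kJ/mol at pH 7, standard state (value quoted above)

# Assumed cellular concentrations in mol/L, for illustration only.
atp, adp, pi = 5e-3, 1e-4, 1e-3

Q = (adp * pi) / atp                      # reaction quotient
dG = dG_standard + R * T * math.log(Q)    # natural log, per the relation above
print(f"Q = {Q:.1e}, deltaG ≈ {dG:.1f} kJ/mol")
# -> about -58 kJ/mol with these assumptions, in the ballpark of the -57 kJ/mol quoted above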
Along with pH, the free energy change of ATP hydrolysis is also associated with Mg2+ concentration, from ΔG°' = −35.7 kJ/mol at a Mg2+ concentration of zero, to ΔG°' = −31 kJ/mol at [Mg2+] = 5 mM. Higher concentrations of Mg2+ decrease free energy released in the reaction due to binding of Mg2+ ions to negatively charged oxygen atoms of ATP at pH 7. Production from AMP and ADP Production, aerobic conditions A typical intracellular concentration of ATP may be 1–10 μmol per gram of tissue in a variety of eukaryotes. The dephosphorylation of ATP and rephosphorylation of ADP and AMP occur repeatedly in the course of aerobic metabolism. ATP can be produced by a number of distinct cellular processes; the three main pathways in eukaryotes are (1) glycolysis, (2) the citric acid cycle/oxidative phosphorylation, and (3) beta-oxidation. The overall process of oxidizing glucose to carbon dioxide, the combination of pathways 1 and 2, known as cellular respiration, produces about 30 equivalents of ATP from each molecule of glucose. ATP production by a non-photosynthetic aerobic eukaryote occurs mainly in the mitochondria, which comprise nearly 25% of the volume of a typical cell. Glycolysis In glycolysis, glucose and glycerol are metabolized to pyruvate. Glycolysis generates two equivalents of ATP through substrate phosphorylation catalyzed by two enzymes, phosphoglycerate kinase (PGK) and pyruvate kinase. Two equivalents of nicotinamide adenine dinucleotide (NADH) are also produced, which can be oxidized via the electron transport chain and result in the generation of additional ATP by ATP synthase. The pyruvate generated as an end-product of glycolysis is a substrate for the Krebs Cycle. Glycolysis is viewed as consisting of two phases with five steps each. In phase 1, "the preparatory phase", glucose is converted to 2 d-glyceraldehyde-3-phosphate (g3p). One ATP is invested in Step 1, and another ATP is invested in Step 3. Steps 1 and 3 of glycolysis are referred to as "Priming Steps". In Phase 2, two equivalents of g3p are converted to two pyruvates. In Step 7, two ATP are produced. Also, in Step 10, two further equivalents of ATP are produced. In Steps 7 and 10, ATP is generated from ADP. A net of two ATPs is formed in the glycolysis cycle. The glycolysis pathway is later associated with the Citric Acid Cycle which produces additional equivalents of ATP. Regulation In glycolysis, hexokinase is directly inhibited by its product, glucose-6-phosphate, and pyruvate kinase is inhibited by ATP itself. The main control point for the glycolytic pathway is phosphofructokinase (PFK), which is allosterically inhibited by high concentrations of ATP and activated by high concentrations of AMP. The inhibition of PFK by ATP is unusual since ATP is also a substrate in the reaction catalyzed by PFK; the active form of the enzyme is a tetramer that exists in two conformations, only one of which binds the second substrate fructose-6-phosphate (F6P). The protein has two binding sites for ATP – the active site is accessible in either protein conformation, but ATP binding to the inhibitor site stabilizes the conformation that binds F6P poorly. A number of other small molecules can compensate for the ATP-induced shift in equilibrium conformation and reactivate PFK, including cyclic AMP, ammonium ions, inorganic phosphate, and fructose-1,6- and -2,6-biphosphate. 
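The ATP bookkeeping of glycolysis described above (two ATP invested in the priming steps, and two ATP produced at each of steps 7 and 10 because two molecules of g3p are processed per glucose) can be tallied in a few lines. The step labels below simply mirror that description; this is an illustrative sketch, not a metabolic model.

# Sketch of the glycolysis ATP bookkeeping described above.
atp_changes = {
    "step 1 (priming)":  -1,
    "step 3 (priming)":  -1,
    "step 7 (x2 g3p)":   +2,
    "step 10 (x2 g3p)":  +2,
}

net = sum(atp_changes.values())
print(f"net ATP per glucose from glycolysis: {net}")   # -> 2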
Citric acid cycle In the mitochondrion, pyruvate is oxidized by the pyruvate dehydrogenase complex to the acetyl group, which is fully oxidized to carbon dioxide by the citric acid cycle (also known as the Krebs cycle). Every "turn" of the citric acid cycle produces two molecules of carbon dioxide, one equivalent of ATP guanosine triphosphate (GTP) through substrate-level phosphorylation catalyzed by succinyl-CoA synthetase, as succinyl-CoA is converted to succinate, three equivalents of NADH, and one equivalent of FADH2. NADH and FADH2 are recycled (to NAD+ and FAD, respectively) by oxidative phosphorylation, generating additional ATP. The oxidation of NADH results in the synthesis of 2–3 equivalents of ATP, and the oxidation of one FADH2 yields between 1–2 equivalents of ATP. The majority of cellular ATP is generated by this process. Although the citric acid cycle itself does not involve molecular oxygen, it is an obligately aerobic process because O2 is used to recycle the NADH and FADH2. In the absence of oxygen, the citric acid cycle ceases. The generation of ATP by the mitochondrion from cytosolic NADH relies on the malate-aspartate shuttle (and to a lesser extent, the glycerol-phosphate shuttle) because the inner mitochondrial membrane is impermeable to NADH and NAD+. Instead of transferring the generated NADH, a malate dehydrogenase enzyme converts oxaloacetate to malate, which is translocated to the mitochondrial matrix. Another malate dehydrogenase-catalyzed reaction occurs in the opposite direction, producing oxaloacetate and NADH from the newly transported malate and the mitochondrion's interior store of NAD+. A transaminase converts the oxaloacetate to aspartate for transport back across the membrane and into the intermembrane space. In oxidative phosphorylation, the passage of electrons from NADH and FADH2 through the electron transport chain releases the energy to pump protons out of the mitochondrial matrix and into the intermembrane space. This pumping generates a proton motive force that is the net effect of a pH gradient and an electric potential gradient across the inner mitochondrial membrane. Flow of protons down this potential gradient – that is, from the intermembrane space to the matrix – yields ATP by ATP synthase. Three ATP are produced per turn. Although oxygen consumption appears fundamental for the maintenance of the proton motive force, in the event of oxygen shortage (hypoxia), intracellular acidosis (mediated by enhanced glycolytic rates and ATP hydrolysis), contributes to mitochondrial membrane potential and directly drives ATP synthesis. Most of the ATP synthesized in the mitochondria will be used for cellular processes in the cytosol; thus it must be exported from its site of synthesis in the mitochondrial matrix. ATP outward movement is favored by the membrane's electrochemical potential because the cytosol has a relatively positive charge compared to the relatively negative matrix. For every ATP transported out, it costs 1 H+. Producing one ATP costs about 3 H+. Therefore, making and exporting one ATP requires 4H+. The inner membrane contains an antiporter, the ADP/ATP translocase, which is an integral membrane protein used to exchange newly synthesized ATP in the matrix for ADP in the intermembrane space. Regulation The citric acid cycle is regulated mainly by the availability of key substrates, particularly the ratio of NAD+ to NADH and the concentrations of calcium, inorganic phosphate, ATP, ADP, and AMP. 
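Setting regulation aside for a moment, the per-carrier yields quoted above (2–3 ATP per NADH, 1–2 ATP per FADH2) give a rough per-glucose total. The carrier counts in the sketch below are the usual textbook tallies per glucose (2 NADH from glycolysis, 2 from pyruvate oxidation, 6 NADH and 2 FADH2 from two turns of the citric acid cycle), stated here as assumptions rather than figures taken from this article.

# Rough estimate of ATP per glucose from the per-carrier yields quoted above.
substrate_level_atp = 2 + 2        # glycolysis net + citric acid cycle GTP/ATP
nadh  = 2 + 2 + 6                  # glycolysis + pyruvate oxidation + citric acid cycle (assumed tallies)
fadh2 = 2

low  = substrate_level_atp + nadh * 2 + fadh2 * 1
high = substrate_level_atp + nadh * 3 + fadh2 * 2
print(f"approximate ATP per glucose: {low}-{high}")
# -> 26-38, bracketing the "about 30" figure quoted earlier in this section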
Citrate – the ion that gives its name to the cycle – is a feedback inhibitor of citrate synthase and also inhibits PFK, providing a direct link between the regulation of the citric acid cycle and glycolysis. Beta oxidation In the presence of air and various cofactors and enzymes, fatty acids are converted to acetyl-CoA. The pathway is called beta-oxidation. Each cycle of beta-oxidation shortens the fatty acid chain by two carbon atoms and produces one equivalent each of acetyl-CoA, NADH, and FADH2. The acetyl-CoA is metabolized by the citric acid cycle to generate ATP, while the NADH and FADH2 are used by oxidative phosphorylation to generate ATP. Dozens of ATP equivalents are generated by the beta-oxidation of a single long acyl chain. Regulation In oxidative phosphorylation, the key control point is the reaction catalyzed by cytochrome c oxidase, which is regulated by the availability of its substrate – the reduced form of cytochrome c. The amount of reduced cytochrome c available is directly related to the amounts of other substrates. Thus, a high ratio of [NADH] to [NAD+] or a high ratio of [ADP][Pi] to [ATP] implies a high amount of reduced cytochrome c and a high level of cytochrome c oxidase activity. An additional level of regulation is introduced by the transport rates of ATP and NADH between the mitochondrial matrix and the cytoplasm. Ketosis Ketone bodies can be used as fuels, yielding 22 ATP and 2 GTP molecules per acetoacetate molecule when oxidized in the mitochondria. Ketone bodies are transported from the liver to other tissues, where acetoacetate and beta-hydroxybutyrate can be reconverted to acetyl-CoA to produce reducing equivalents (NADH and FADH2), via the citric acid cycle. Ketone bodies cannot be used as fuel by the liver, because the liver lacks the enzyme β-ketoacyl-CoA transferase, also called thiolase. Acetoacetate in low concentrations is taken up by the liver and undergoes detoxification through the methylglyoxal pathway which ends with lactate. Acetoacetate in high concentrations is absorbed by cells other than those in the liver and enters a different pathway via 1,2-propanediol. Though the pathway follows a different series of steps requiring ATP, 1,2-propanediol can be turned into pyruvate. Production, anaerobic conditions Fermentation is the metabolism of organic compounds in the absence of air. It involves substrate-level phosphorylation in the absence of a respiratory electron transport chain. The equation for the reaction of glucose to form lactic acid is: C6H12O6 + 2 ADP + 2 Pi → 2 CH3CH(OH)COOH + 2 ATP + 2 H2O Anaerobic respiration is respiration in the absence of O2. Prokaryotes can utilize a variety of electron acceptors. These include nitrate, sulfate, and carbon dioxide. ATP replenishment by nucleoside diphosphate kinases ATP can also be synthesized through several so-called "replenishment" reactions catalyzed by the enzyme families of nucleoside diphosphate kinases (NDKs), which use other nucleoside triphosphates as a high-energy phosphate donor, and the ATP:guanido-phosphotransferase family. ATP production during photosynthesis In plants, ATP is synthesized in the thylakoid membrane of the chloroplast. The process is called photophosphorylation. The "machinery" is similar to that in mitochondria except that light energy is used to pump protons across a membrane to produce a proton-motive force. ATP synthesis then proceeds exactly as in oxidative phosphorylation.
Some of the ATP produced in the chloroplasts is consumed in the Calvin cycle, which produces triose sugars. ATP recycling The total quantity of ATP in the human body is about 0.1 mol. The majority of ATP is recycled from ADP by the aforementioned processes. Thus, at any given time, the total amount of ATP + ADP remains fairly constant. The energy used by human cells in an adult requires the hydrolysis of 100 to 150 mol of ATP daily, which means a human will typically use their body weight's worth of ATP over the course of the day. Each equivalent of ATP is recycled 1000–1500 times during a single day (100–150 mol divided by 0.1 mol), corresponding to approximately 9×10^20 molecules/s. Biochemical functions Intracellular signaling ATP is involved in signal transduction by serving as substrate for kinases, enzymes that transfer phosphate groups. Kinases are the most common ATP-binding proteins. They share a small number of common folds. Phosphorylation of a protein by a kinase can activate a cascade such as the mitogen-activated protein kinase cascade. ATP is also a substrate of adenylate cyclase, most commonly in G protein-coupled receptor signal transduction pathways, and is transformed to the second messenger cyclic AMP, which is involved in triggering calcium signals by the release of calcium from intracellular stores. This form of signal transduction is particularly important in brain function, although it is involved in the regulation of a multitude of other cellular processes. DNA and RNA synthesis ATP is one of four monomers required in the synthesis of RNA. The process is promoted by RNA polymerases. A similar process occurs in the formation of DNA, except that ATP is first converted to the deoxyribonucleotide dATP. Like many condensation reactions in nature, DNA replication and DNA transcription also consume ATP. Amino acid activation in protein synthesis Aminoacyl-tRNA synthetase enzymes consume ATP in the attachment of tRNA to amino acids, forming aminoacyl-tRNA complexes. Aminoacyl transferase binds AMP-amino acid to tRNA. The coupling reaction proceeds in two steps: aa + ATP ⟶ aa-AMP + PPi; aa-AMP + tRNA ⟶ aa-tRNA + AMP. The amino acid is coupled to the terminal nucleotide at the 3′-end of the tRNA (the A in the sequence CCA) via an ester bond. ATP binding cassette transporter Transporting chemicals out of a cell against a gradient is often associated with ATP hydrolysis. Transport is mediated by ATP binding cassette transporters. The human genome encodes 48 ABC transporters, which are used for exporting drugs, lipids, and other compounds. Extracellular signalling and neurotransmission Cells secrete ATP to communicate with other cells in a process called purinergic signalling. ATP serves as a neurotransmitter in many parts of the nervous system, modulates ciliary beating, and affects vascular oxygen supply, among other roles. ATP is either secreted directly across the cell membrane through channel proteins or is pumped into vesicles which then fuse with the membrane. Cells detect ATP using the purinergic receptor proteins P2X and P2Y. ATP has been shown to be a critically important signalling molecule for microglia–neuron interactions in the adult brain, as well as during brain development. Furthermore, tissue-injury-induced ATP signalling is a major factor in rapid microglial phenotype changes. Muscle contraction ATP fuels muscle contractions. Muscle contractions are regulated by signaling pathways, with different muscle types regulated by specific pathways and stimuli based on their particular function. 
However, in all muscle types, contraction is performed by the proteins actin and myosin. ATP is initially bound to myosin. When the myosin ATPase hydrolyzes the bound ATP into ADP and inorganic phosphate, myosin is positioned so that it can bind to actin. Myosin bound to ADP and Pi forms cross-bridges with actin; the subsequent release of ADP and Pi releases energy as the power stroke. The power stroke causes the actin filament to slide past the myosin filament, shortening the muscle and causing a contraction. Another ATP molecule can then bind to myosin, releasing it from actin and allowing this process to repeat. Protein solubility ATP has recently been proposed to act as a biological hydrotrope and has been shown to affect proteome-wide solubility. Abiogenic origins Acetyl phosphate (AcP), a precursor to ATP, can readily be synthesized at modest yields from thioacetate at pH 7 and 20 °C and at pH 8 and 50 °C, although acetyl phosphate is less stable at warmer temperatures and under alkaline conditions than under cooler and acidic-to-neutral conditions. It is unable to promote polymerization of ribonucleotides and amino acids and is only capable of phosphorylating organic compounds. It was shown that AcP can promote aggregation and stabilization of AMP in the presence of Na+, and that aggregation of nucleotides could promote polymerization above 75 °C in the absence of Na+. It is possible that polymerization promoted by AcP could occur at mineral surfaces. It was shown that ADP can only be phosphorylated to ATP by AcP, and that other nucleoside diphosphates were not phosphorylated by AcP. This might explain why all lifeforms use ATP to drive biochemical reactions. ATP analogues Biochemistry laboratories often use in vitro studies to explore ATP-dependent molecular processes. ATP analogs are also used in X-ray crystallography to determine a protein structure in complex with ATP, often together with other substrates. Enzyme inhibitors of ATP-dependent enzymes such as kinases are needed to examine the binding sites and transition states involved in ATP-dependent reactions. Most useful ATP analogs cannot be hydrolyzed as ATP would be; instead, they trap the enzyme in a structure closely related to the ATP-bound state. Adenosine 5′-(γ-thiotriphosphate) is an extremely common ATP analog in which one of the gamma-phosphate oxygens is replaced by a sulfur atom; this anion is hydrolyzed at a dramatically slower rate than ATP itself and functions as an inhibitor of ATP-dependent processes. In crystallographic studies, hydrolysis transition states are modeled by the bound vanadate ion. Caution is warranted in interpreting the results of experiments using ATP analogs, since some enzymes can hydrolyze them at appreciable rates at high concentration. Medical use ATP is used intravenously for some heart-related conditions. History ATP was discovered in 1929 by Karl Lohmann and Jendrassik and, independently, by Cyrus Fiske and Yellapragada Subba Rao of Harvard Medical School, both teams competing against each other to find an assay for phosphorus. It was proposed to be the intermediary between energy-yielding and energy-requiring reactions in cells by Fritz Albert Lipmann in 1941. It was first synthesized in the laboratory by Alexander Todd in 1948, and he was awarded the Nobel Prize in Chemistry in 1957 partly for this work. The 1978 Nobel Prize in Chemistry was awarded to Peter Dennis Mitchell for the discovery of the chemiosmotic mechanism of ATP synthesis. The 1997 Nobel Prize in Chemistry was divided, one half jointly to Paul D. 
Boyer and John E. Walker "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)" and the other half to Jens C. Skou "for the first discovery of an ion-transporting enzyme, Na+, K+ -ATPase." See also Adenosine-tetraphosphatase Adenosine methylene triphosphate ATPases ATP test Creatine Cyclic adenosine monophosphate (cAMP) Nucleotide exchange factor Phosphagen References External links ATP bound to proteins in the PDB ScienceAid: Energy ATP and Exercise PubChem entry for Adenosine Triphosphate KEGG entry for Adenosine Triphosphate Adenosine receptor agonists Cellular respiration Coenzymes Ergogenic aids Exercise physiology Neurotransmitters Nucleotides Phosphate esters Purinergic signalling Purines Substances discovered in the 1920s
Adenosine triphosphate
[ "Chemistry", "Biology" ]
5,172
[ "Cellular respiration", "Coenzymes", "Neurotransmitters", "Organic compounds", "Biochemistry", "Neurochemistry", "Metabolism" ]
1,805
https://en.wikipedia.org/wiki/Antibiotic
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the ones which cause the common cold or influenza. Drugs which inhibit growth of viruses are termed antiviral drugs or antivirals. Antibiotics are also not effective against fungi. Drugs which inhibit growth of fungi are called antifungal drugs. Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same effect of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include bactericides, bacteriostatics, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed. Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. Antimicrobial resistance (AMR), a naturally occurring process, is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The World Health Organization has classified AMR as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Each year, nearly 5 million deaths are associated with AMR globally. Global deaths attributable to AMR numbered 1.27 million in 2019. Etymology The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. 
These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947. The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not. The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped. Usage Medical uses Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days. When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis. Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia particularly cancer-related. The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke. Routes of administration There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. 
Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving a high and sustained concentration of antibiotic at the site of infection, reducing the potential for systemic absorption and toxicity, and reducing the total volume of antibiotic required, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring. It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose. Global consumption Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption published in 2018 analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption, with a rate of 64.4. Burundi had the lowest at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed. Side effects Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide range of adverse side effects, from mild to very severe, depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis. Common side effects of oral antibiotics include diarrhea, caused by disruption of the species composition in the intestinal flora, resulting, for example, in overgrowth of pathogenic bacteria such as Clostridioides difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid. Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic, including human, cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. They are also known to affect chloroplasts. Interactions Birth control pills There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate that antibiotics do not interfere with birth control pills; for example, clinical studies suggest that the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). 
Situations that may increase the risk of oral contraceptive failure include non-compliance (missing doses of the pill), vomiting, diarrhea, and gastrointestinal disorders or interpatient variability in oral contraceptive absorption that affect ethinylestradiol serum levels in the blood. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended. In cases where antibiotics have been suggested to affect the efficiency of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these cases may be due to an increase in the activity of hepatic enzymes causing increased breakdown of the pill's active ingredients. Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception. Alcohol Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered. Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound. Pharmacodynamics The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, but antibacterials have also been shown to eliminate bacterial infection in clinical settings. Since the activity of antibacterials depends frequently on their concentration, in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial. To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy. 
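As an illustration of how a minimum inhibitory concentration is read out in practice, the sketch below picks the lowest concentration in a two-fold broth-microdilution series that shows no visible growth. The concentrations and growth calls are hypothetical example data, not measurements from any particular assay.

```python
# Minimal sketch of reading an MIC from a two-fold broth microdilution series.
# The concentrations (µg/mL) and growth calls below are hypothetical examples.

dilution_series = [
    (0.25, True),   # (antibiotic concentration, visible bacterial growth?)
    (0.5,  True),
    (1.0,  True),
    (2.0,  False),
    (4.0,  False),
    (8.0,  False),
]

def minimum_inhibitory_concentration(series):
    """Return the lowest tested concentration showing no visible growth."""
    inhibitory = [conc for conc, growth in sorted(series) if not growth]
    return inhibitory[0] if inhibitory else None

print(minimum_inhibitory_concentration(dilution_series))  # 2.0 in this example
```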
Combination therapy In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome as the combined effect of both antibiotics is better than their individual effect. Fosfomycin has the highest number of synergistic combinations among antibiotics and is almost always used as a partner drug. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic and the combined effects of the two antibiotics may be less than if one of the antibiotics was given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic. In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria. Classes Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities, killing the bacteria. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic, inhibiting further growth (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin). Production With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons. Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. 
Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions. Resistance Antimicrobial resistance (AMR or AR) is a naturally occurring process. AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The emergence of antibiotic-resistant bacteria is a common phenomenon mainly caused by the overuse/misuse. It represents a threat to health globally. Each year, nearly 5 million deaths are associated with AMR globally. Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains. Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces. The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use. Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria. Paleontological data show that both antibiotics and antibiotic resistance are ancient compounds and mechanisms. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability. Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound. 
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were, for a while, well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic. In recent years, even anaerobic bacteria, historically considered less concerning in terms of resistance, have demonstrated high rates of antibiotic resistance, particularly Bacteroides, for which resistance rates to penicillin have been reported to exceed 90%. Misuse Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. However, potential harm from antibiotics extends beyond selection of antimicrobial resistance and their overuse is associated with adverse effects for patients themselves, seen most clearly in critically ill patients in Intensive care units. Self-prescribing of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics. Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. Inappropriate antibiotic treatment, for example, is their prescription to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point of care diagnostic tests, particularly in resource-limited settings is considered one of the drivers of antibiotic misuse. Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. 
This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children. The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association. Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year. There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations. Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse. Other forms of antibiotic-associated harm include anaphylaxis, drug toxicity most notably kidney and liver damage, and super-infections with resistant organisms. Antibiotics are also known to affect mitochondrial function, and this may contribute to the bioenergetic failure of immune cells seen in sepsis. They also alter the microbiome of the gut, lungs, and skin, which may be associated with adverse effects such as Clostridioides difficile associated diarrhoea. Whilst antibiotics can clearly be lifesaving in patients with bacterial infections, their overuse, especially in patients where infections are hard to diagnose, can lead to harm via multiple mechanisms. History Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. 
Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source. The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Various Essential oils have been shown to have anti-microbial properties. Along with this, the plants from which these oils have been derived can be used as niche anti-microbial agents. Synthetic antibiotics derived from dyes Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine. This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913. The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials. Penicillin and other natural antibiotics Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics". 
In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum, which is used in the making of some types of blue cheese, did not display bacterial contamination. In 1895, the Italian physician Vincenzo Tiberio published a paper on the antibacterial power of some extracts of mold. In 1897, doctoral student Ernest Duchesne submitted a dissertation, "Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes", the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics. In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium rubens, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists. Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides described above. The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming. Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. 
It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War. Late 20th century During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003. Antibiotic pipeline Both the WHO and the Infectious Disease Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Disease Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli currently in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities – antibiotics (including combinations) – were in phase 1–3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority. A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin have been approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for use for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. New cephalosporin–β-lactamase inhibitor combinations also approved include ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection. Possible improvements include clarification of clinical trial regulations by the FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, the FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible." 
Replenishing the antibiotic pipeline and developing other new therapies Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments. Natural product-based antibiotic discovery Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic, or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes). In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics. Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility). Immunoglobulin therapy Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial diseases. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridioides difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). 
Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors. Phage therapy Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages and their host ranges are extremely specific for certain bacteria, thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phage protects "good" bacteria from destruction. Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails. There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics, remains an attractive option. Fecal microbiota transplants Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Fecal microbiota transplantation has also been used more recently for inflammatory bowel diseases. Antisense RNA-based treatments Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single-stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single-stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia. In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies. 
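As a small illustration of step (b) above – synthesizing single-stranded RNA complementary to a target mRNA – the sketch below computes the reverse-complement antisense sequence for a short mRNA segment. The target sequence shown is a made-up placeholder, not the real acpP, lpxC, rpsJ, or mecA sequence, and real antisense designs also consider length, delivery chemistry, and off-target binding.

```python
# Minimal sketch of designing an antisense RNA for a target mRNA segment.
# The target sequence below is a hypothetical placeholder, not a real gene.

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense RNA (reverse complement) of an mRNA, written 5'->3'."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(mrna.upper()))

target_mrna = "AUGGCUAAAGGUUUCCUG"   # hypothetical 5'->3' mRNA segment
print(antisense(target_mrna))        # complementary strand that would hybridize with it
```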
CRISPR-Cas9-based treatments In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA. Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection. Reducing the selection pressure for antibiotic resistance In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antimicrobial resistance (AMR), such as antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food), better use of vaccines and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, where antibiotics are alternated by clinicians to treat microbial diseases, is proposed, but recent studies revealed such strategies are ineffective against antibiotic resistance. Vaccines Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases. See also References Further reading External links Anti-infective agents .
Antibiotic
[ "Chemistry", "Biology" ]
9,794
[ "Biotechnology products", "Anti-infective agents", "Antibiotics", "Bactericides", "Chemicals in medicine", "Biocides" ]
1,910
https://en.wikipedia.org/wiki/Agarose%20gel%20electrophoresis
Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field that moves the charged molecules through the agarose matrix, where they are separated by size. Agarose gel is easy to cast, has relatively few charged groups, and is particularly suitable for separating DNA of the size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7% and 2% agarose dissolved in a suitable electrophoresis buffer. Properties of agarose gel Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling temperature, and both depend on the source of the agarose. Low-melting and low-gelling agaroses made through chemical modifications are also available. Agarose gel has large pore size and good gel strength, making it suitable as an anticonvection medium for the electrophoresis of DNA and large protein molecules. The pore size of a 1% gel has been estimated to range from 100 nm to 200–500 nm, and its gel strength allows gels as dilute as 0.15% to form a slab for gel electrophoresis. Low-concentration gels (0.1–0.2%), however, are fragile and therefore hard to handle. Agarose gel has lower resolving power than polyacrylamide gel for DNA but has a greater range of separation, and is therefore used for DNA fragments of usually 50–20,000 bp in size. The limit of resolution for standard agarose gel electrophoresis is around 750 kb, but resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large proteins, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5–10 nm. A 0.9% agarose gel has pores large enough for the entry of bacteriophage T4. The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups create a flow of water in the opposite direction to the movement of DNA in a process called electroendosmosis (EEO), and can therefore retard the movement of DNA and cause blurring of bands. Higher concentration gels would have higher electroendosmotic flow. Low EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids, but high EEO agarose may be used for other purposes. The lower sulfate content of low EEO agarose, particularly low-melting point (LMP) agarose, is also beneficial in cases where the DNA extracted from gel is to be used for further manipulation, as the presence of contaminating sulfates may affect some subsequent procedures, such as ligation and PCR. 
Zero EEO agaroses however are undesirable for some applications as they may be made by adding positively charged groups and such groups can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used in preference to agar as the agaropectin component in agar contains a significant amount of negatively charged sulfate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum proteins, a high EEO may be desirable, and agaropectin may be added in the gel used. Migration of nucleic acids in agarose gel Factors affecting migration of nucleic acid in gel A number of factors can affect the migration of nucleic acids: the dimension of the gel pores (gel concentration), size of DNA being electrophoresed, the voltage used, the ionic strength of the buffer, and the concentration of intercalating dye such as ethidium bromide if used during electrophoresis. Smaller molecules travel faster than larger molecules in gel, and double-stranded DNA moves at a rate that is inversely proportional to the logarithm of the number of base pairs. This relationship however breaks down with very large DNA fragments, and separation of very large DNA fragments requires the use of pulsed field gel electrophoresis (PFGE), which applies alternating current from different directions and the large DNA fragments are separated as they reorient themselves with the changing field. For standard agarose gel electrophoresis, larger molecules are resolved better using a low concentration gel while smaller molecules separate better at high concentration gel. Higher concentration gels, however, require longer run times (sometimes days). The movement of the DNA may be affected by the conformation of the DNA molecule, for example, supercoiled DNA usually moves faster than relaxed DNA because it is tightly coiled and hence more compact. In a normal plasmid DNA preparation, multiple forms of DNA may be present. Gel electrophoresis of the plasmids would normally show the negatively supercoiled form as the main band, while nicked DNA (open circular form) and the relaxed closed circular form appears as minor bands. The rate at which the various forms move however can change using different electrophoresis conditions, and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel. Ethidium bromide which intercalates into circular DNA can change the charge, length, as well as the superhelicity of the DNA molecule, therefore its presence in gel during electrophoresis can affect its movement. For example, the positive charge of ethidium bromide can reduce the DNA movement by 15%. Agarose gel electrophoresis can be used to resolve circular DNA with different supercoiling topology. DNA damage due to increased cross-linking will also reduce electrophoretic DNA migration in a dose-dependent way. The rate of migration of the DNA is proportional to the voltage applied, i.e. the higher the voltage, the faster the DNA moves. The resolution of large DNA fragments however is lower at high voltage. The mobility of DNA may also change in an unsteady field – in a field that is periodically reversed, the mobility of DNA of a particular size may drop significantly at a particular cycling frequency. 
This phenomenon can result in band inversion in field inversion gel electrophoresis (FIGE), whereby larger DNA fragments move faster than smaller ones. Migration anomalies "Smiley" gels - this edge effect is caused when the voltage applied is too high for the gel concentration used. Overloading of DNA - overloading of DNA slows down the migration of DNA fragments. Contamination - presence of impurities, such as salts or proteins can affect the movement of the DNA. Mechanism of migration and separation The negative charge of its phosphate backbone moves the DNA towards the positively charged anode during electrophoresis. However, the migration of DNA molecules in solution, in the absence of a gel matrix, is independent of molecular weight during electrophoresis. The gel matrix is therefore responsible for the separation of DNA by size during electrophoresis, and a number of models exist to explain the mechanism of separation of biomolecules in gel matrix. A widely accepted one is the Ogston model which treats the polymer matrix as a sieve. A globular protein or a random coil DNA moves through the interconnected pores, and the movement of larger molecules is more likely to be impeded and slowed down by collisions with the gel matrix, and the molecules of different sizes can therefore be separated in this sieving process. The Ogston model however breaks down for large molecules whereby the pores are significantly smaller than size of the molecule. For DNA molecules of size greater than 1 kb, a reptation model (or its variants) is most commonly used. This model assumes that the DNA can crawl in a "snake-like" fashion (hence "reptation") through the pores as an elongated molecule. A biased reptation model applies at higher electric field strength, whereby the leading end of the molecule become strongly biased in the forward direction and pulls the rest of the molecule along. Real-time fluorescence microscopy of stained molecules, however, showed more subtle dynamics during electrophoresis, with the DNA showing considerable elasticity as it alternately stretching in the direction of the applied field and then contracting into a ball, or becoming hooked into a U-shape when it gets caught on the polymer fibres. General procedure The details of an agarose gel electrophoresis experiment may vary depending on methods, but most follow a general procedure. Casting of gel The gel is prepared by dissolving the agarose powder in an appropriate buffer, such as TAE or TBE, to be used in electrophoresis. The agarose is dispersed in the buffer before heating it to near-boiling point, but avoid boiling. The melted agarose is allowed to cool sufficiently before pouring the solution into a cast as the cast may warp or crack if the agarose solution is too hot. A comb is placed in the cast to create wells for loading sample, and the gel should be completely set before use. The concentration of gel affects the resolution of DNA separation. The agarose gel is composed of microscopic pores through which the molecules travel, and there is an inverse relationship between the pore size of the agarose gel and the concentration – pore size decreases as the density of agarose fibers increases. High gel concentration improves separation of smaller DNA molecules, while lowering gel concentration permits large DNA molecules to be separated. The process allows fragments ranging from 50 base pairs to several mega bases to be separated depending on the gel concentration used. 
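Since gel concentration is specified as weight of agarose per volume of buffer (as described in the next paragraph), the amount of powder to weigh out for a gel of a chosen percentage follows from a one-line calculation. The Python sketch below is only an illustration of that arithmetic; the function name and the example volumes are hypothetical choices, not values from the text.

```python
def agarose_mass_grams(percent_w_v: float, buffer_volume_ml: float) -> float:
    """Grams of agarose powder for a gel of the given % (w/v) concentration.

    A 1% (w/v) gel corresponds to 1 g of agarose per 100 ml of buffer.
    """
    return percent_w_v / 100.0 * buffer_volume_ml


# Example: a 0.8% gel in 50 ml of buffer needs 0.4 g of agarose,
# while a 2% gel in the same volume needs 1.0 g.
for pct in (0.8, 1.0, 2.0):
    print(f"{pct}% gel, 50 ml buffer: {agarose_mass_grams(pct, 50):.2f} g agarose")
```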
The concentration is measured in weight of agarose over volume of buffer used (g/ml). For a standard agarose gel electrophoresis, a 0.8% gel gives good separation or resolution of large 5–10kb DNA fragments, while a 2% gel gives good resolution for small 0.2–1kb fragments. A 1% gel is often used for standard electrophoresis. High percentage gels are often brittle and may not set evenly, while low percentage gels (0.1–0.2%) are fragile and not easy to handle. Low-melting-point (LMP) agarose gels are also more fragile than normal agarose gel. Low-melting point agarose may be used on its own or simultaneously with standard agarose for the separation and isolation of DNA. PFGE and FIGE are often done with high percentage agarose gels. Loading of samples Once the gel has set, the comb is removed, leaving wells where DNA samples can be loaded. Loading buffer is mixed with the DNA sample before the mixture is loaded into the wells. The loading buffer contains a dense compound, which may be glycerol, sucrose, or Ficoll, that raises the density of the sample so that the DNA sample may sink to the bottom of the well. If the DNA sample contains residual ethanol after its preparation, it may float out of the well. The loading buffer also includes colored dyes such as xylene cyanol and bromophenol blue used to monitor the progress of the electrophoresis. The DNA samples are loaded using a pipette. Electrophoresis Agarose gel electrophoresis is most commonly done horizontally in a subaqueous mode whereby the slab gel is completely submerged in buffer during electrophoresis. It is also possible, but less common, to perform the electrophoresis vertically, as well as horizontally with the gel raised on agarose legs using an appropriate apparatus. The buffer used in the gel is the same as the running buffer in the electrophoresis tank, which is why electrophoresis in the subaqueous mode is possible with agarose gel. For optimal resolution of DNA greater than 2kb in size in standard gel electrophoresis, 5 to 8 V/cm is recommended (the distance in cm refers to the distance between electrodes, therefore this recommended voltage would be 5 to 8 multiplied by the distance between the electrodes in cm). Voltage may also be limited by the fact that it heats the gel and may cause the gel to melt if it is run at high voltage for a prolonged period, especially if the gel used is LMP agarose gel. Too high a voltage may also reduce resolution, as well as causing band streaking for large DNA molecules. Too low a voltage may lead to broadening of bands for small DNA fragments due to dispersion and diffusion. Since DNA is not visible in natural light, the progress of the electrophoresis is monitored using colored dyes. Xylene cyanol (light blue color) comigrates with large DNA fragments, while bromophenol blue (dark blue) comigrates with the smaller fragments. Less commonly used dyes include Cresol Red and Orange G, which migrate ahead of bromophenol blue. A DNA marker is also run together for the estimation of the molecular weight of the DNA fragments. Note however that the size of a circular DNA like plasmids cannot be accurately gauged using standard markers unless it has been linearized by restriction digest; alternatively, a supercoiled DNA marker may be used. Staining and visualization DNA as well as RNA are normally visualized by staining with ethidium bromide, which intercalates into the major grooves of the DNA and fluoresces under UV light. 
The intercalation depends on the concentration of DNA and thus, a band with high intensity will indicate a higher amount of DNA compared to a band of less intensity. The ethidium bromide may be added to the agarose solution before it gels, or the DNA gel may be stained later after electrophoresis. Destaining of the gel is not necessary but may produce better images. Other methods of staining are available; examples are MIDORI Green, SYBR Green, GelRed, methylene blue, brilliant cresyl blue, Nile blue sulfate, and crystal violet. SYBR Green, GelRed and other similar commercial products are sold as safer alternatives to ethidium bromide as it has been shown to be mutagenic in Ames test, although the carcinogenicity of ethidium bromide has not actually been established. SYBR Green requires the use of a blue-light transilluminator. DNA stained with crystal violet can be viewed under natural light without the use of a UV transilluminator which is an advantage, however it may not produce a strong band. When stained with ethidium bromide, the gel is viewed with an ultraviolet (UV) transilluminator. The UV light excites the electrons within the aromatic ring of ethidium bromide, and once they return to the ground state, light is released, making the DNA and ethidium bromide complex fluoresce. Standard transilluminators use wavelengths of 302/312-nm (UV-B), however exposure of DNA to UV radiation for as little as 45 seconds can produce damage to DNA and affect subsequent procedures, for example reducing the efficiency of transformation, in vitro transcription, and PCR. Exposure of DNA to UV radiation therefore should be limited. Using a higher wavelength of 365 nm (UV-A range) causes less damage to the DNA but also produces much weaker fluorescence with ethidium bromide. Where multiple wavelengths can be selected in the transilluminator, shorter wavelength can be used to capture images, while longer wavelength should be used if it is necessary to work on the gel for any extended period of time. The transilluminator apparatus may also contain image capture devices, such as a digital or polaroid camera, that allow an image of the gel to be taken or printed. For gel electrophoresis of protein, the bands may be visualised with Coomassie or silver stains. Downstream procedures The separated DNA bands are often used for further procedures, and a DNA band may be cut out of the gel as a slice, dissolved and purified. Contaminants however may affect some downstream procedures such as PCR, and low melting point agarose may be preferred in some cases as it contains fewer of the sulfates that can affect some enzymatic reactions. The gels may also be used for blotting techniques. Buffers In general, the ideal buffer should have good conductivity, produce less heat and have a long life. There are a number of buffers used for agarose electrophoresis; common ones for nucleic acids include tris/acetate/EDTA (TAE) and tris/borate/EDTA (TBE). The buffers used contain EDTA to inactivate many nucleases which require divalent cation for their function. The borate in TBE buffer can be problematic as borate can polymerize, and/or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity, but it provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. Many other buffers have been proposed, e.g. 
lithium borate (LB), isoelectric histidine, pKa-matched Good's buffers, etc.; in most cases the purported rationale is lower current (less heat) and/or matched ion mobilities, which leads to longer buffer life. Tris-phosphate buffer has high buffering capacity but cannot be used if DNA extracted is to be used in phosphate-sensitive reactions. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage could be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. A size difference as small as one base pair could be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate). Other buffering systems may be used in specific applications, for example, barbituric acid-sodium barbiturate or tris-barbiturate buffers may be used in agarose gel electrophoresis of proteins, for example in the detection of abnormal distribution of proteins. Applications Estimation of the size of DNA molecules following digestion with restriction enzymes, e.g., in restriction mapping of cloned DNA. Estimation of the DNA concentration by comparing the intensity of the nucleic acid band with the corresponding band of the size marker. Analysis of products of a polymerase chain reaction (PCR), e.g., in molecular genetic diagnosis or genetic fingerprinting Separation of DNA fragments for extraction and purification. Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer. Separation of proteins, for example, screening of protein abnormalities in clinical chemistry. Agarose gels are easily cast and handled compared to other matrices and nucleic acids are not chemically altered during electrophoresis. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons. See also Gel electrophoresis Immunodiffusion, Immunoelectrophoresis SDD-AGE Northern blot SDS-polyacrylamide gel electrophoresis Southern blot References External links How to run a DNA or RNA gel Animation of gel analysis of DNA restriction fragments Video and article of agarose gel electrophoresis Step by step photos of running a gel and extracting DNA Drinking straw electrophoresis! A typical method from wikiversity Building a gel electrophoresis chamber Biological techniques and tools Molecular biology Electrophoresis Polymerase chain reaction Articles containing video clips
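As an illustration of the first application listed above (estimating fragment size against a size marker), the sketch below interpolates an unknown band's migration distance on a semi-log plot of ladder fragment size against distance, exploiting the roughly linear relationship between migration and the logarithm of fragment length noted earlier. It is a minimal Python sketch under assumed inputs; the ladder distances, sizes, and function name are hypothetical.

```python
import math

def estimate_size_bp(distance_mm: float, ladder: list[tuple[float, int]]) -> float:
    """Estimate fragment size from migration distance by linear interpolation
    of log10(size) against distance, using a ladder of (distance_mm, size_bp) pairs."""
    pts = sorted(ladder)  # sort by migration distance (shorter distance = larger fragment)
    xs = [d for d, _ in pts]
    ys = [math.log10(bp) for _, bp in pts]
    # Find the bracketing ladder points and interpolate linearly in log space.
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= distance_mm <= x1:
            frac = (distance_mm - x0) / (x1 - x0)
            return 10 ** (y0 + frac * (y1 - y0))
    raise ValueError("distance outside the range covered by the ladder")

# Hypothetical ladder: (migration distance in mm, fragment size in bp)
ladder = [(12.0, 10000), (20.0, 5000), (30.0, 2000), (40.0, 1000), (52.0, 500)]
print(round(estimate_size_bp(35.0, ladder)))  # an unknown band at 35 mm -> roughly 1.4 kb
```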
Agarose gel electrophoresis
[ "Chemistry", "Biology" ]
4,409
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Instrumental analysis", "Biochemical separation processes", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry", "Electrophoresis" ]
1,915
https://en.wikipedia.org/wiki/Antigen
In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response. Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria. Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction. Antigen can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases. Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example. Etymology Paul Ehrlich coined the term antibody () in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre) named the hypothetical substances halfway between bacterial constituents and antibodies "antigenic or immunogenic substances" (). He originally believed those substances to be precursors of antibodies, just as a zymogen is a precursor of an enzyme. But, by 1903, he understood that an antigen induces the production of immune bodies (antibodies) and wrote that the word antigen is a contraction of antisomatogen (). The Oxford English Dictionary indicates that the logical construction should be "anti(body)-gen". The term originally referred to a substance that acts as an antibody generator. Terminology Epitope – the distinct surface features of an antigen, its antigenic determinant.Antigenic molecules, normally "large" biological polymers, usually present surface features that can act as points of interaction for specific antibodies. Any such feature constitutes an epitope. Most antigens have the potential to be bound by multiple antibodies, each of which is specific to one of the antigen's epitopes. Using the "lock and key" metaphor, the antigen can be seen as a string of keys (epitopes) each of which matches a different lock (antibody). Different antibody idiotypes, each have distinctly formed complementarity-determining regions. Allergen – A substance capable of causing an allergic reaction. The (detrimental) reaction may result after exposure via ingestion, inhalation, injection, or contact with skin. 
Superantigen – A class of antigens that cause non-specific activation of T-cells, resulting in polyclonal T-cell activation and massive cytokine release. Tolerogen – A substance that invokes a specific immune non-responsiveness due to its molecular form. If its molecular form is changed, a tolerogen can become an immunogen. Immunoglobulin-binding protein – Proteins such as protein A, protein G, and protein L that are capable of binding to antibodies at positions outside of the antigen-binding site. While antigens are the "target" of antibodies, immunoglobulin-binding proteins "attack" antibodies. T-dependent antigen – Antigens that require the assistance of T cells to induce the formation of specific antibodies. T-independent antigen – Antigens that stimulate B cells directly. Immunodominant antigens – Antigens that dominate (over all others from a pathogen) in their ability to produce an immune response. T cell responses typically are directed against a relatively few immunodominant epitopes, although in some cases (e.g., infection with the malaria pathogen Plasmodium spp.) it is dispersed over a relatively large number of parasite antigens. Antigen-presenting cells present antigens in the form of peptides on histocompatibility molecules. The T cells selectively recognize the antigens; depending on the antigen and the type of the histocompatibility molecule, different types of T cells will be activated. For T-cell receptor (TCR) recognition, the peptide must be processed into small fragments inside the cell and presented by a major histocompatibility complex (MHC). The antigen cannot elicit the immune response without the help of an immunologic adjuvant. Similarly, the adjuvant component of vaccines plays an essential role in the activation of the innate immune system. An immunogen is an antigen substance (or adduct) that is able to trigger a humoral (innate) or cell-mediated immune response. It first initiates an innate immune response, which then causes the activation of the adaptive immune response. An antigen binds the highly variable immunoreceptor products (B-cell receptor or T-cell receptor) once these have been generated. Immunogens are those antigens, termed immunogenic, capable of inducing an immune response. At the molecular level, an antigen can be characterized by its ability to bind to an antibody's paratopes. Different antibodies have the potential to discriminate among specific epitopes present on the antigen surface. A hapten is a small molecule that can only induce an immune response when attached to a larger carrier molecule, such as a protein. Antigens can be proteins, polysaccharides, lipids, nucleic acids or other biomolecules. This includes parts (coats, capsules, cell walls, flagella, fimbriae, and toxins) of bacteria, viruses, and other microorganisms. Non-microbial non-self antigens can include pollen, egg white, and proteins from transplanted tissues and organs or on the surface of transfused blood cells. Sources Antigens can be classified according to their source. Exogenous antigens Exogenous antigens are antigens that have entered the body from the outside, for example, by inhalation, ingestion or injection. The immune system's response to exogenous antigens is often subclinical. By endocytosis or phagocytosis, exogenous antigens are taken into the antigen-presenting cells (APCs) and processed into fragments. APCs then present the fragments to T helper cells (CD4+) by the use of class II histocompatibility molecules on their surface. 
Some T cells are specific for the peptide:MHC complex. They become activated and start to secrete cytokines, substances that activate cytotoxic T lymphocytes (CTL), antibody-secreting B cells, macrophages and other particles. Some antigens start out as exogenous and later become endogenous (for example, intracellular viruses). Intracellular antigens can be returned to circulation upon the destruction of the infected cell. Endogenous antigens Endogenous antigens are generated within normal cells as a result of normal cell metabolism, or because of viral or intracellular bacterial infection. The fragments are then presented on the cell surface in the complex with MHC class I molecules. If activated cytotoxic CD8+ T cells recognize them, the T cells secrete various toxins that cause the lysis or apoptosis of the infected cell. In order to keep the cytotoxic cells from killing cells just for presenting self-proteins, the cytotoxic cells (self-reactive T cells) are deleted as a result of tolerance (negative selection). Endogenous antigens include xenogenic (heterologous), autologous and idiotypic or allogenic (homologous) antigens. Sometimes antigens are part of the host itself in an autoimmune disease. Autoantigens An autoantigen is usually a self-protein or protein complex (and sometimes DNA or RNA) that is recognized by the immune system of patients with a specific autoimmune disease. Under normal conditions, these self-proteins should not be the target of the immune system, but in autoimmune diseases, their associated T cells are not deleted and instead attack. Neoantigens Neoantigens are those that are entirely absent from the normal human genome. As compared with nonmutated self-proteins, neoantigens are of relevance to tumor control, as the quality of the T cell pool that is available for these antigens is not affected by central T cell tolerance. Technology to systematically analyze T cell reactivity against neoantigens became available only recently. Neoantigens can be directly detected and quantified. Viral antigens For virus-associated tumors, such as cervical cancer and a subset of head and neck cancers, epitopes derived from viral open reading frames contribute to the pool of neoantigens. Tumor antigens Tumor antigens are those antigens that are presented by MHC class I or MHC class II molecules on the surface of tumor cells. Antigens found only on such cells are called tumor-specific antigens (TSAs) and generally result from a tumor-specific mutation. More common are antigens that are presented by tumor cells and normal cells, called tumor-associated antigens (TAAs). Cytotoxic T lymphocytes that recognize these antigens may be able to destroy tumor cells. Tumor antigens can appear on the surface of the tumor in the form of, for example, a mutated receptor, in which case they are recognized by B cells. For human tumors without a viral etiology, novel peptides (neo-epitopes) are created by tumor-specific DNA alterations. Process A large fraction of human tumor mutations are effectively patient-specific. Therefore, neoantigens may also be based on individual tumor genomes. Deep-sequencing technologies can identify mutations within the protein-coding part of the genome (the exome) and predict potential neoantigens. In mice models, for all novel protein sequences, potential MHC-binding peptides were predicted. The resulting set of potential neoantigens was used to assess T cell reactivity. 
Exome–based analyses were exploited in a clinical setting, to assess reactivity in patients treated by either tumor-infiltrating lymphocyte (TIL) cell therapy or checkpoint blockade. Neoantigen identification was successful for multiple experimental model systems and human malignancies. The false-negative rate of cancer exome sequencing is low—i.e.: the majority of neoantigens occur within exonic sequence with sufficient coverage. However, the vast majority of mutations within expressed genes do not produce neoantigens that are recognized by autologous T cells. As of 2015 mass spectrometry resolution is insufficient to exclude many false positives from the pool of peptides that may be presented by MHC molecules. Instead, algorithms are used to identify the most likely candidates. These algorithms consider factors such as the likelihood of proteasomal processing, transport into the endoplasmic reticulum, affinity for the relevant MHC class I alleles and gene expression or protein translation levels. The majority of human neoantigens identified in unbiased screens display a high predicted MHC binding affinity. Minor histocompatibility antigens, a conceptually similar antigen class are also correctly identified by MHC binding algorithms. Another potential filter examines whether the mutation is expected to improve MHC binding. The nature of the central TCR-exposed residues of MHC-bound peptides is associated with peptide immunogenicity. Nativity A native antigen is an antigen that is not yet processed by an APC to smaller parts. T cells cannot bind native antigens, but require that they be processed by APCs, whereas B cells can be activated by native ones. Antigenic specificity Antigenic specificity is the ability of the host cells to recognize an antigen specifically as a unique molecular entity and distinguish it from another with exquisite precision. Antigen specificity is due primarily to the side-chain conformations of the antigen. It is measurable and need not be linear or of a rate-limited step or equation. Both T cells and B cells are cellular components of adaptive immunity. See also References Immune system Biomolecules
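The candidate-filtering factors described above (predicted MHC binding affinity, gene expression, and whether the mutation improves binding relative to the normal peptide) can be combined into a simple prioritization step. The following Python sketch is only a schematic illustration of that kind of filter; the field names, cutoff values, peptide sequences, and scoring rule are hypothetical and are not taken from any specific published pipeline.

```python
from dataclasses import dataclass

@dataclass
class PeptideCandidate:
    sequence: str            # mutated peptide considered for MHC class I presentation (hypothetical)
    mhc_affinity_nm: float   # predicted binding affinity of the mutant peptide (lower = stronger)
    wt_affinity_nm: float    # predicted affinity of the corresponding wild-type peptide
    expression_tpm: float    # expression level of the source gene

def prioritize(candidates, max_affinity_nm=500.0, min_expression_tpm=1.0):
    """Keep expressed, well-binding mutant peptides and rank them so that strong
    binders whose wild-type counterpart binds poorly come first."""
    kept = [c for c in candidates
            if c.mhc_affinity_nm <= max_affinity_nm and c.expression_tpm >= min_expression_tpm]
    # A higher wild-type/mutant affinity ratio suggests the mutation improves MHC binding.
    return sorted(kept, key=lambda c: c.wt_affinity_nm / c.mhc_affinity_nm, reverse=True)

candidates = [
    PeptideCandidate("SIINFEKLM", 45.0, 2300.0, 12.0),
    PeptideCandidate("LLDFVRFMG", 300.0, 350.0, 8.0),
    PeptideCandidate("AAAWYLWEV", 60.0, 80.0, 0.2),   # filtered out: gene not expressed
]
for c in prioritize(candidates):
    print(c.sequence)
```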
Antigen
[ "Chemistry", "Biology" ]
2,732
[ "Natural products", "Biochemistry", "Antigens", "Immune system", "Organic compounds", "Organ systems", "Biomolecules", "Molecular biology", "Structural biology" ]
1,941
https://en.wikipedia.org/wiki/Aeon
The word aeon, also spelled eon (in American and Australian English), originally meant "life", "vital force" or "being", "generation" or "a period of time", though it tended to be translated as "age" in the sense of "ages", "forever", "timeless" or "for eternity". It is a Latin transliteration from the ancient Greek word aiōn, from an archaic form meaning "century". In Greek, it literally refers to the timespan of one hundred years. A cognate Latin word for "age", aevum, is present in words such as eternal, longevity and mediaeval. Although the term aeon may be used in reference to a period of a billion years (especially in geology, cosmology and astronomy), its more common usage is for any long, indefinite period. Aeon can also refer to the four aeons on the geologic time scale that make up the Earth's history, the Hadean, Archean, Proterozoic, and the current aeon, Phanerozoic. Astronomy and cosmology In astronomy, an aeon is defined as a billion years (10⁹ years, abbreviated AE). Roger Penrose uses the word aeon to describe the period between successive and cyclic Big Bangs within the context of conformal cyclic cosmology. Philosophy and mysticism In Buddhism, an "aeon" or kalpa (Sanskrit) is often said to be 1,334,240,000 years, the life cycle of the world. Yet, these numbers are symbolic, not literal. Christianity's idea of "eternal life" comes from the Greek word for life, zōē, and a form of aiōn, which could mean life in the next aeon, the Kingdom of God, or Heaven, just as much as immortality, as in John 3:16. According to Christian universalism, the Greek New Testament scriptures use the word aiōn to mean a long period and the word aiōnios to mean "during a long period"; thus, there was a time before the aeons, and the aeonian period is finite. After each person's mortal life ends, they are judged worthy of aeonian life or aeonian punishment. That is, after the period of the aeons, all punishment will cease and death is overcome and then God becomes the all in each one (1Cor 15:28). This contrasts with the conventional Christian belief in eternal life and eternal punishment. Occultists of the Thelema and Ordo Templi Orientis (English: "Order of the Temple of the East") traditions sometimes speak of a "magical Aeon" that may last for perhaps as little as 2,000 years. Gnosticism In many Gnostic systems, the various emanations of God, who is also known by such names as the One, the Monad, Aion teleos ("The Broadest Aeon"), Bythos ("depth or profundity"), Proarkhe ("before the beginning"), Arkhe ("the beginning"), Sophia ("wisdom"), and Christos ("the Anointed One"), are called Aeons. In the different systems these emanations are differently named, classified, and described, but the emanation theory itself is common to all forms of Gnosticism. In the Basilidian Gnosis they are called sonships; according to Marcus, they are numbers and sounds; in Valentinianism they form male/female pairs called syzygies. See also Aion (deity) Kalpa (aeon) Saeculum – comparable Latin concept Aeon (company) References New Testament Greek words and phrases Time Units of time Gnosticism
Aeon
[ "Physics", "Mathematics" ]
804
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities", "Units of measurement" ]
1,962
https://en.wikipedia.org/wiki/Apparent%20magnitude
Apparent magnitude (m) is a measure of the brightness of a star, astronomical object or other celestial objects like artificial satellites. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer. Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856. The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of the fifth root of 100, or about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as a magnitude 4.0 star, and 100 times as bright as a magnitude 7.0 star. The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5. The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude. Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than its apparent brightness when observed, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs. Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, references to "magnitude" are understood to mean apparent magnitude. Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution. Apparent magnitude is technically a measure of illuminance, which can also be measured in photometric units such as lux. History The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The brightest stars in the night sky were said to be of first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of human visual perception (without the aid of a telescope). Each grade of magnitude was considered twice the brightness of the following grade (a logarithmic scale), although that ratio was subjective as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is generally believed to have originated with Hipparchus. 
This cannot be proved or disproved because Hipparchus's original star catalogue is lost. The only preserved text by Hipparchus himself (a commentary to Aratus) clearly documents that he did not have a system to describe brightness with numbers: He always uses terms like "big" or "small", "bright" or "faint" or even descriptions such as "visible at full moon". In 1856, Norman Robert Pogson formalized the system by defining a first magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio. The 1884 Harvard Photometry and 1886 Potsdamer Durchmusterung star catalogs popularized Pogson's ratio, and eventually it became a de facto standard in modern astronomy to describe differences in brightness. Defining and calibrating what magnitude 0.0 means is difficult, and different types of measurements which detect different kinds of light (possibly by using filters) have different zero points. Pogson's original 1856 paper defined magnitude 6.0 to be the faintest star the unaided eye can see, but the true limit for faintest possible visible star varies depending on the atmosphere and how high a star is in the sky. The Harvard Photometry used an average of 100 stars close to Polaris to define magnitude 5.0. Later, the Johnson UBV photometric system defined multiple types of photometric measurements with different filters, where magnitude 0.0 for each filter is defined to be the average of six stars with the same spectral type as Vega. This was done so the color index of these stars would be 0. Although this system is often called "Vega normalized", Vega is slightly dimmer than the six-star average used to define magnitude 0.0, meaning Vega's magnitude is normalized to 0.03 by definition. With the modern magnitude systems, brightness is described using Pogson's ratio. In practice, magnitude numbers rarely go above 30 before stars become too faint to detect. While Vega is close to magnitude 0, there are four brighter stars in the night sky at visible wavelengths (and more at infrared wavelengths) as well as the bright planets Venus, Mars, and Jupiter, and since brighter means smaller magnitude, these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has a magnitude of −1.4 in the visible. Negative magnitudes for other very bright astronomical objects can be found in the table below. Astronomers have developed other photometric zero point systems as alternatives to Vega normalized systems. The most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band. However, the AB magnitude system is defined assuming an idealized detector measuring only one wavelength of light, while real detectors accept energy from a range of wavelengths. Measurement Precision measurement of magnitude (photometry) requires calibration of the photographic or (usually) electronic detection apparatus. 
This generally involves contemporaneous observation, under identical conditions, of standard stars whose magnitude using that spectral filter is accurately known. Moreover, as the amount of light actually received by a telescope is reduced due to transmission through the Earth's atmosphere, the airmasses of the target and calibration stars must be taken into account. Typically one would observe a few different stars of known magnitude which are sufficiently similar. Calibrator stars close in the sky to the target are favoured (to avoid large differences in the atmospheric paths). If those stars have somewhat different zenith angles (altitudes) then a correction factor as a function of airmass can be derived and applied to the airmass at the target's position. Such calibration obtains the brightness as would be observed from above the atmosphere, where apparent magnitude is defined. The apparent magnitude scale in astronomy reflects the received power of stars and not their amplitude. Newcomers should consider using the relative brightness measure in astrophotography to adjust exposure times between stars. Apparent magnitude also integrates over the entire object, regardless of its focus, and this needs to be taken into account when scaling exposure times for objects with significant apparent size, like the Sun, Moon and planets. For example, directly scaling the exposure time from the Moon to the Sun works because they are approximately the same size in the sky. However, scaling the exposure from the Moon to Saturn would result in an overexposure if the image of Saturn takes up a smaller area on your sensor than the Moon did (at the same magnification, or more generally, f/#). Calculations The dimmer an object appears, the higher the numerical value given to its magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the magnitude $m$, in the spectral band $x$, would be given by $m_x = -5 \log_{100}\left(\frac{F_x}{F_{x,0}}\right)$, which is more commonly expressed in terms of common (base-10) logarithms as $m_x = -2.5 \log_{10}\left(\frac{F_x}{F_{x,0}}\right)$, where $F_x$ is the observed irradiance using spectral filter $x$, and $F_{x,0}$ is the reference flux (zero-point) for that photometric filter. Since an increase of 5 magnitudes corresponds to a decrease in brightness by a factor of exactly 100, each magnitude increase implies a decrease in brightness by the factor $100^{1/5} \approx 2.512$ (Pogson's ratio). Inverting the above formula, a magnitude difference $\Delta m = m_1 - m_2$ implies a brightness factor of $\frac{F_2}{F_1} = 100^{\Delta m/5} = 10^{0.4\,\Delta m}$. Example: Sun and Moon What is the ratio in brightness between the Sun and the full Moon? The apparent magnitude of the Sun is −26.832 (brighter), and the mean magnitude of the full moon is −12.74 (dimmer). Difference in magnitude: $\Delta m = -12.74 - (-26.832) = 14.09$. Brightness factor: $10^{0.4 \times 14.09} \approx 4.3 \times 10^{5}$. The Sun appears to be approximately 430,000 times as bright as the full Moon. Magnitude addition Sometimes one might wish to add brightness. For example, photometry on closely separated double stars may only be able to produce a measurement of their combined light output. To find the combined magnitude of that double star knowing only the magnitudes of the individual components, this can be done by adding the brightness (in linear units) corresponding to each magnitude: $10^{-0.4 m_f} = 10^{-0.4 m_1} + 10^{-0.4 m_2}$. Solving for $m_f$ yields $m_f = -2.5 \log_{10}\left(10^{-0.4 m_1} + 10^{-0.4 m_2}\right)$, where $m_f$ is the resulting magnitude after adding the brightnesses referred to by $m_1$ and $m_2$. 
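The relations above translate directly into code. The following Python snippet is a small illustrative sketch (the function names are arbitrary); it reproduces the Sun/Moon brightness ratio and the magnitude-addition formula just given.

```python
import math

def brightness_ratio(m1: float, m2: float) -> float:
    """How many times brighter an object of magnitude m1 is than one of magnitude m2."""
    return 10 ** (0.4 * (m2 - m1))

def combined_magnitude(m1: float, m2: float) -> float:
    """Apparent magnitude of two unresolved sources, adding their fluxes linearly."""
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

# Sun (-26.832) versus full Moon (-12.74): roughly 4.3e5 times brighter.
print(f"{brightness_ratio(-26.832, -12.74):.3g}")

# Two blended magnitude 3.0 stars appear about 0.75 mag brighter than either alone.
print(f"{combined_magnitude(3.0, 3.0):.2f}")   # about 2.25
```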
Apparent bolometric magnitude While magnitude generally refers to a measurement in a particular filter band corresponding to some range of wavelengths, the apparent or absolute bolometric magnitude (mbol) is a measure of an object's apparent or absolute brightness integrated over all wavelengths of the electromagnetic spectrum (also known as the object's irradiance or power, respectively). The zero point of the apparent bolometric magnitude scale is based on the definition that an apparent bolometric magnitude of 0 mag is equivalent to a received irradiance of 2.518 × 10⁻⁸ watts per square metre (W·m⁻²). Absolute magnitude While apparent magnitude is a measure of the brightness of an object as seen by a particular observer, absolute magnitude is a measure of the intrinsic brightness of an object. Flux decreases with distance according to an inverse-square law, so the apparent magnitude of a star depends on both its absolute brightness and its distance (and any extinction). For example, a star at one distance will have the same apparent magnitude as a star four times as bright at twice that distance. In contrast, the intrinsic brightness of an astronomical object does not depend on the distance of the observer or any extinction. The absolute magnitude, M, of a star or astronomical object is defined as the apparent magnitude it would have as seen from a distance of 10 parsecs. The absolute magnitude of the Sun is 4.83 in the V band (visual), 4.68 in the Gaia satellite's G band (green) and 5.48 in the B band (blue). In the case of a planet or asteroid, the absolute magnitude rather means the apparent magnitude it would have if it were one astronomical unit (AU) from both the observer and the Sun, and fully illuminated at maximum opposition (a configuration that is only theoretically achievable, with the observer situated on the surface of the Sun). Standard reference values The magnitude scale is a reverse logarithmic scale. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber–Fechner law), but it is now believed that the response is a power law. Magnitude is complicated by the fact that light is not monochromatic. The sensitivity of a light detector varies according to the wavelength of the light, and the way it varies depends on the type of light detector. For this reason, it is necessary to specify how the magnitude is measured for the value to be meaningful. For this purpose the UBV system is widely used, in which the magnitude is measured in three different wavelength bands: U (centred at about 350 nm, in the near ultraviolet), B (about 435 nm, in the blue region) and V (about 555 nm, in the middle of the human visual range in daylight). The V band was chosen for spectral purposes and gives magnitudes closely corresponding to those seen by the human eye. When an apparent magnitude is discussed without further qualification, the V magnitude is generally understood. Because cooler stars, such as red giants and red dwarfs, emit little energy in the blue and UV regions of the spectrum, their power is often under-represented by the UBV scale. Indeed, some L and T class stars have an estimated magnitude of well over 100, because they emit extremely little visible light, but are strongest in infrared. Measures of magnitude need cautious treatment and it is extremely important to measure like with like. 
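Combining the definition of absolute magnitude at 10 parsecs with the inverse-square dimming described above gives the usual distance-modulus relation $m - M = 5 \log_{10}(d / 10\,\mathrm{pc})$. The Python sketch below is a minimal illustration of that relation, assuming distances in parsecs and neglecting extinction; the function name is arbitrary, and the Sirius distance used in the example is an approximate value supplied here for illustration.

```python
import math

def absolute_from_apparent(m: float, distance_pc: float) -> float:
    """Absolute magnitude M from apparent magnitude m and distance in parsecs,
    using the distance modulus m - M = 5*log10(d / 10 pc); extinction is ignored."""
    return m - 5.0 * math.log10(distance_pc / 10.0)

# Sirius: apparent magnitude -1.46 at roughly 2.64 pc gives M of about +1.4.
print(f"{absolute_from_apparent(-1.46, 2.64):.2f}")
```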
On early 20th century and older orthochromatic (blue-sensitive) photographic film, the relative brightnesses of the blue supergiant Rigel and the red supergiant Betelgeuse irregular variable star (at maximum) are reversed compared to what human eyes perceive, because this archaic film is more sensitive to blue light than it is to red light. Magnitudes obtained from this method are known as photographic magnitudes, and are now considered obsolete. For objects within the Milky Way with a given absolute magnitude, 5 is added to the apparent magnitude for every tenfold increase in the distance to the object. For objects at very great distances (far beyond the Milky Way), this relationship must be adjusted for redshifts and for non-Euclidean distance measures due to general relativity. For planets and other Solar System bodies, the apparent magnitude is derived from its phase curve and the distances to the Sun and observer. List of apparent magnitudes Some of the listed magnitudes are approximate. Telescope sensitivity depends on observing time, optical bandpass, and interfering light from scattering and airglow. See also Angular diameter Distance modulus List of nearest bright stars List of nearest stars Luminosity Surface brightness References External links Observational astronomy Logarithmic scales of measurement
Apparent magnitude
[ "Physics", "Astronomy", "Mathematics" ]
3,043
[ "Physical quantities", "Quantity", "Observational astronomy", "Logarithmic scales of measurement", "Astronomical sub-disciplines" ]
1,997
https://en.wikipedia.org/wiki/Algebraic%20geometry
Algebraic geometry is a branch of mathematics which uses abstract algebraic techniques, mainly from commutative algebra, to solve geometrical problems. Classically, it studies zeros of multivariate polynomials; the modern approach generalizes this in a few different aspects. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations. Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique. In the 20th century, algebraic geometry split into several subareas. The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field. Real algebraic geometry is the study of the real algebraic varieties. Diophantine geometry and, more generally, arithmetic geometry is the study of algebraic varieties over fields that are not algebraically closed and, specifically, over fields of interest in algebraic number theory, such as the field of rational numbers, number fields, finite fields, function fields, and p-adic fields. A large part of singularity theory is devoted to the singularities of algebraic varieties. Computational algebraic geometry is an area that has emerged at the intersection of algebraic geometry and computer algebra, with the rise of computers. It consists mainly of algorithm design and software development for the study of properties of explicitly given algebraic varieties. Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry. One key achievement of this abstract algebraic geometry is Grothendieck's scheme theory which allows one to use sheaf theory to study algebraic varieties in a way which is very similar to its use in the study of differential and analytic manifolds. This is obtained by extending the notion of point: In classical algebraic geometry, a point of an affine variety may be identified, through Hilbert's Nullstellensatz, with a maximal ideal of the coordinate ring, while the points of the corresponding affine scheme are all prime ideals of this ring. 
This means that a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points, and of algebraic number theory. Wiles' proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach. Basic notions Zeros of simultaneous polynomials In classical algebraic geometry, the main objects of interest are the vanishing sets of collections of polynomials, meaning the set of all points that simultaneously satisfy one or more polynomial equations. For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points (x, y, z) with x² + y² + z² − 1 = 0. A "slanted" circle in R3 can be defined as the set of all points (x, y, z) which satisfy the two polynomial equations x² + y² + z² − 1 = 0 and x + y + z = 0. Affine varieties First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries. A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1,...,xn] such that f(M) = p(t1,...,tn) for every point M with coordinates (t1,...,tn) in An. The property of a function to be polynomial (or regular) does not depend on the choice of a coordinate system in An. When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k. Therefore, the set of the regular functions on An is a ring, which is denoted k[An]. We say that a polynomial vanishes at a point if evaluating it at that point gives zero. Let S be a set of polynomials in k[An]. The vanishing set of S (or vanishing locus or zero set) is the set V(S) of all points in An where every polynomial in S vanishes. Symbolically, V(S) = {(t1,...,tn) | p(t1,...,tn) = 0 for all p in S}. A subset of An which is V(S), for some S, is called an algebraic set. The V stands for variety (a specific type of algebraic set to be defined below). Given a subset U of An, can one recover the set of polynomials which generate it? If U is any subset of An, define I(U) to be the set of all polynomials whose vanishing set contains U. The I stands for ideal: if two polynomials f and g both vanish on U, then f+g vanishes on U, and if h is any polynomial, then hf vanishes on U, so I(U) is always an ideal of the polynomial ring k[An]. Two natural questions to ask are: Given a subset U of An, when is U = V(I(U))? Given a set S of polynomials, when is S = I(V(S))? The answer to the first question is provided by introducing the Zariski topology, a topology on An whose closed sets are the algebraic sets, and which directly reflects the algebraic structure of k[An]. Then U = V(I(U)) if and only if U is an algebraic set or equivalently a Zariski-closed set. The answer to the second question is given by Hilbert's Nullstellensatz. In one of its forms, it says that I(V(S)) is the radical of the ideal generated by S. 
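The correspondence between a set of polynomials S, its vanishing set V(S), and the ideal I(V(S)) can be explored computationally for small examples. The sketch below uses Python with SymPy's Gröbner basis routines (one possible tool choice, not something prescribed by the text) to test membership in the ideal generated by the "slanted circle" equations above; membership in the ideal guarantees vanishing on V(S), while the Nullstellensatz relates vanishing on V(S) to membership in the radical.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# S defines the "slanted" circle: the unit sphere intersected with a plane.
S = [x**2 + y**2 + z**2 - 1, x + y + z]

# A Groebner basis of the ideal generated by S allows ideal-membership tests.
G = groebner(S, x, y, z, order='lex')

# This polynomial is 1*(sphere) + 3*(plane), so it lies in the ideal
# and therefore vanishes at every point of V(S).
print(G.contains(x**2 + y**2 + z**2 - 1 + 3*(x + y + z)))  # True

# x - y does not vanish on all of V(S) and is not in the ideal.
print(G.contains(x - y))  # False
```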
In more abstract language, there is a Galois connection, giving rise to two closure operators; they can be identified, and naturally play a basic role in the theory; the example is elaborated at Galois connection. For various reasons we may not always want to work with the entire ideal corresponding to an algebraic set U. Hilbert's basis theorem implies that ideals in k[An] are always finitely generated. An algebraic set is called irreducible if it cannot be written as the union of two smaller algebraic sets. Any algebraic set is a finite union of irreducible algebraic sets and this decomposition is unique. Thus its elements are called the irreducible components of the algebraic set. An irreducible algebraic set is also called a variety. It turns out that an algebraic set is a variety if and only if it may be defined as the vanishing set of a prime ideal of the polynomial ring. Some authors do not make a clear distinction between algebraic sets and varieties and use irreducible variety to make the distinction when needed. Regular functions Just as continuous functions are the natural maps on topological spaces and smooth functions are the natural maps on differentiable manifolds, there is a natural class of functions on an algebraic set, called regular functions or polynomial functions. A regular function on an algebraic set V contained in An is the restriction to V of a regular function on An. For an algebraic set defined on the field of the complex numbers, the regular functions are smooth and even analytic. It may seem unnaturally restrictive to require that a regular function always extend to the ambient space, but it is very similar to the situation in a normal topological space, where the Tietze extension theorem guarantees that a continuous function on a closed subset always extends to the ambient topological space. Just as with the regular functions on affine space, the regular functions on V form a ring, which we denote by k[V]. This ring is called the coordinate ring of V. Since regular functions on V come from regular functions on An, there is a relationship between the coordinate rings. Specifically, if a regular function on V is the restriction of two functions f and g in k[An], then f − g is a polynomial function which is null on V and thus belongs to I(V). Thus k[V] may be identified with k[An]/I(V). Morphism of affine varieties Using regular functions from an affine variety to A1, we can define regular maps from one affine variety to another. First we will define a regular map from a variety into affine space: Let V be a variety contained in An. Choose m regular functions on V, and call them f1, ..., fm. We define a regular map f from V to Am by letting . In other words, each fi determines one coordinate of the range of f. If V′ is a variety contained in Am, we say that f is a regular map from V to V′ if the range of f is contained in V′. The definition of the regular maps apply also to algebraic sets. The regular maps are also called morphisms, as they make the collection of all affine algebraic sets into a category, where the objects are the affine algebraic sets and the morphisms are the regular maps. The affine varieties is a subcategory of the category of the algebraic sets. Given a regular map g from V to V′ and a regular function f of k[V′], then . The map is a ring homomorphism from k[V′] to k[V]. Conversely, every ring homomorphism from k[V′] to k[V] defines a regular map from V to V′. 
This defines an equivalence of categories between the category of algebraic sets and the opposite category of the finitely generated reduced k-algebras. This equivalence is one of the starting points of scheme theory. Rational function and birational equivalence In contrast to the preceding sections, this section concerns only varieties and not algebraic sets. On the other hand, the definitions extend naturally to projective varieties (next section), as an affine variety and its projective completion have the same field of functions. If V is an affine variety, its coordinate ring is an integral domain and has thus a field of fractions which is denoted k(V) and called the field of the rational functions on V or, shortly, the function field of V. Its elements are the restrictions to V of the rational functions over the affine space containing V. The domain of a rational function f is not V but the complement of the subvariety (a hypersurface) where the denominator of f vanishes. As with regular maps, one may define a rational map from a variety V to a variety V'. As with the regular maps, the rational maps from V to V' may be identified with the field homomorphisms from k(V') to k(V). Two affine varieties are birationally equivalent if there are two rational maps between them which are inverse to each other in the regions where both are defined. Equivalently, they are birationally equivalent if their function fields are isomorphic. An affine variety is a rational variety if it is birationally equivalent to an affine space. This means that the variety admits a rational parameterization, that is, a parametrization with rational functions. For example, the circle of equation x2 + y2 − 1 = 0 is a rational curve, as it has the parametric equation x = 2t/(1 + t2), y = (1 − t2)/(1 + t2), which may also be viewed as a rational map from the line to the circle. The problem of resolution of singularities is to know if every algebraic variety is birationally equivalent to a variety whose projective completion is nonsingular (see also smooth completion). It was solved in the affirmative in characteristic 0 by Heisuke Hironaka in 1964 and is yet unsolved in finite characteristic. Projective variety Just as the formulas for the roots of second, third, and fourth degree polynomials suggest extending real numbers to the more algebraically complete setting of the complex numbers, many properties of algebraic varieties suggest extending affine space to a more geometrically complete projective space. Whereas the complex numbers are obtained by adding the number i, a root of the polynomial x2 + 1, projective space is obtained by adding in appropriate points "at infinity", points where parallel lines may meet. To see how this might come about, consider the variety V(y − x2). If we draw it, we get a parabola. As x goes to positive infinity, the slope of the line from the origin to the point (x, x2) also goes to positive infinity. As x goes to negative infinity, the slope of the same line goes to negative infinity. Compare this to the variety V(y − x3). This is a cubic curve. As x goes to positive infinity, the slope of the line from the origin to the point (x, x3) goes to positive infinity just as before. But unlike before, as x goes to negative infinity, the slope of the same line goes to positive infinity as well; the exact opposite of the parabola. So the behavior "at infinity" of V(y − x3) is different from the behavior "at infinity" of V(y − x2).
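The "behavior at infinity" of the two curves can be made explicit by homogenizing their equations and intersecting with the line at infinity. The following is a minimal sketch using the Python library SymPy (the variable names and the homogenizations are spelled out as assumptions here, not taken from the text); the qualitative difference between the two answers is what the next paragraph quantifies.

from sympy import symbols, solve

x, y, z = symbols('x y z')

# Homogenize y - x**2 and y - x**3 with a new variable z, then set z = 0
# (the line at infinity) to find the points at infinity of each curve.
parabola_h = y*z - x**2        # homogenization of y - x**2 (degree 2)
cubic_h    = y*z**2 - x**3     # homogenization of y - x**3 (degree 3)

for f in (parabola_h, cubic_h):
    at_infinity = f.subs(z, 0)
    print(f, '->', solve(at_infinity, x))
# Both curves meet the line at infinity only where x = 0, i.e. in the single
# point with homogeneous coordinates [0 : 1 : 0]; the difference lies in the
# local shape of each curve at that point (regular point versus cusp).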
The consideration of the projective completion of the two curves, which is their prolongation "at infinity" in the projective plane, allows us to quantify this difference: the point at infinity of the parabola is a regular point, whose tangent is the line at infinity, while the point at infinity of the cubic curve is a cusp. Also, both curves are rational, as they are parameterized by x, and the Riemann–Roch theorem implies that the cubic curve must have a singularity, which must be at infinity, as all its points in the affine space are regular. Thus many of the properties of algebraic varieties, including birational equivalence and all the topological properties, depend on the behavior "at infinity", and so it is natural to study the varieties in projective space. Furthermore, the introduction of projective techniques made many theorems in algebraic geometry simpler and sharper: for example, Bézout's theorem on the number of intersection points between two varieties can be stated in its sharpest form only in projective space. For these reasons, projective space plays a fundamental role in algebraic geometry. Nowadays, the projective space Pn of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension n + 1, or equivalently as the set of the vector lines in a vector space of dimension n + 1. When a coordinate system has been chosen in the space of dimension n + 1, all the points of a line have the same set of coordinates, up to the multiplication by an element of k. This defines the homogeneous coordinates of a point of Pn as a sequence of n + 1 elements of the base field k, defined up to the multiplication by a nonzero element of k (the same for the whole sequence). A polynomial in n + 1 variables vanishes at all points of a line passing through the origin if and only if it is homogeneous. In this case, one says that the polynomial vanishes at the corresponding point of Pn. This allows us to define a projective algebraic set in Pn as the set V(S), where a finite set S of homogeneous polynomials vanishes. Like for affine algebraic sets, there is a bijection between the projective algebraic sets and the reduced homogeneous ideals which define them. The projective varieties are the projective algebraic sets whose defining ideal is prime. In other words, a projective variety is a projective algebraic set whose homogeneous coordinate ring is an integral domain, the projective coordinate ring being defined as the quotient of the graded ring of the polynomials in n + 1 variables by the homogeneous (reduced) ideal defining the variety. Every projective algebraic set may be uniquely decomposed into a finite union of projective varieties. The only regular functions which may be defined properly on a projective variety are the constant functions. Thus this notion is not used in projective situations. On the other hand, the field of the rational functions or function field is a useful notion, which, similarly to the affine case, is defined as the set of the quotients of two homogeneous elements of the same degree in the homogeneous coordinate ring. Real algebraic geometry Real algebraic geometry is the study of real algebraic varieties. The fact that the field of the real numbers is an ordered field cannot be ignored in such a study. For example, the curve of equation x2 + y2 − a = 0 is a circle if a > 0, but has no real points if a < 0. Real algebraic geometry also investigates, more broadly, semi-algebraic sets, which are the solutions of systems of polynomial inequalities.
For example, neither branch of the hyperbola of equation xy − 1 = 0 is a real algebraic variety. However, the branch in the first quadrant is a semi-algebraic set defined by xy − 1 = 0 and x > 0. One open problem in real algebraic geometry is the following part of Hilbert's sixteenth problem: decide which respective positions are possible for the ovals of a nonsingular plane curve of degree 8. Computational algebraic geometry One may date the origin of computational algebraic geometry to the meeting EUROSAM'79 (International Symposium on Symbolic and Algebraic Manipulation) held at Marseille, France, in June 1979. At this meeting, Dennis S. Arnon showed that George E. Collins's cylindrical algebraic decomposition (CAD) allows the computation of the topology of semi-algebraic sets, Bruno Buchberger presented Gröbner bases and his algorithm to compute them, and Daniel Lazard presented a new algorithm for solving systems of homogeneous polynomial equations with a computational complexity which is essentially polynomial in the expected number of solutions and thus simply exponential in the number of the unknowns. This algorithm is strongly related to Macaulay's multivariate resultant. Since then, most results in this area are related to one or several of these items, either by using or improving one of these algorithms, or by finding algorithms whose complexity is simply exponential in the number of the variables. A body of mathematical theory complementary to symbolic methods, called numerical algebraic geometry, has been developed over the last several decades. The main computational method is homotopy continuation. This supports, for example, a model of floating point computation for solving problems of algebraic geometry. Gröbner basis A Gröbner basis is a system of generators of a polynomial ideal whose computation allows the deduction of many properties of the affine algebraic variety defined by the ideal. Given an ideal I defining an algebraic set V: V is empty (over an algebraically closed extension of the base field) if and only if the Gröbner basis for any monomial ordering is reduced to {1}. By means of the Hilbert series one may compute the dimension and the degree of V from any Gröbner basis of I for a monomial ordering refining the total degree. If the dimension of V is 0, one may compute the points (finite in number) of V from any Gröbner basis of I (see Systems of polynomial equations). A Gröbner basis computation allows one to remove from V all irreducible components which are contained in a given hypersurface. A Gröbner basis computation allows one to compute the Zariski closure of the image of V by the projection on the first k coordinates, and the subset of the image where the projection is not proper. More generally, Gröbner basis computations allow one to compute the Zariski closure of the image and the critical points of a rational function of V into another affine variety. Gröbner basis computations do not allow one to compute directly the primary decomposition of I nor the prime ideals defining the irreducible components of V, but most algorithms for this involve Gröbner basis computation. The algorithms which are not based on Gröbner bases use regular chains but may need Gröbner bases in some exceptional situations. Gröbner bases are deemed to be difficult to compute. In fact they may contain, in the worst case, polynomials whose degree is doubly exponential in the number of variables and a number of polynomials which is also doubly exponential.
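On typical small inputs, however, the computation is immediate, as the worst-case discussion that follows makes clear. A minimal sketch of the dimension-0 use of Gröbner bases mentioned in the list above, written with the Python library SymPy (the system chosen, a circle intersected with a line, is an illustrative assumption):

from sympy import symbols, groebner, solve

x, y = symbols('x y')

# Intersection of a circle and a line: a zero-dimensional algebraic set.
F = [x**2 + y**2 - 1, x - y]

# A lexicographic Groebner basis "triangularizes" the system: the basis is
# {x - y, 2*y**2 - 1}, whose last element involves y alone, so the finitely
# many points of V(F) can be read off by back substitution.
G = groebner(F, x, y, order='lex')
print(G)
print(solve(F, [x, y]))    # the two real intersection points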
However, this is only a worst case complexity, and the complexity bound of Lazard's algorithm of 1979 may frequently apply. Faugère's F5 algorithm realizes this complexity, as it may be viewed as an improvement of Lazard's 1979 algorithm. It follows that the best implementations allow one to compute almost routinely with algebraic sets of degree more than 100. This means that, presently, the difficulty of computing a Gröbner basis is strongly related to the intrinsic difficulty of the problem. Cylindrical algebraic decomposition (CAD) CAD is an algorithm which was introduced in 1973 by G. Collins to implement with an acceptable complexity the Tarski–Seidenberg theorem on quantifier elimination over the real numbers. This theorem concerns the formulas of first-order logic whose atomic formulas are polynomial equalities or inequalities between polynomials with real coefficients. These formulas are thus the formulas which may be constructed from the atomic formulas by the logical operators and (∧), or (∨), not (¬), for all (∀) and exists (∃). Tarski's theorem asserts that, from such a formula, one may compute an equivalent formula without quantifiers (∀, ∃). The complexity of CAD is doubly exponential in the number of variables. This means that CAD allows one, in theory, to solve every problem of real algebraic geometry which may be expressed by such a formula, that is, almost every problem concerning explicitly given varieties and semi-algebraic sets. While Gröbner basis computation has doubly exponential complexity only in rare cases, CAD has almost always this high complexity. This implies that, unless most polynomials appearing in the input are linear, it may not solve problems with more than four variables. Since 1973, most of the research on this subject is devoted either to improving CAD or to finding alternative algorithms in special cases of general interest. As an example of the state of the art, there are efficient algorithms to find at least a point in every connected component of a semi-algebraic set, and thus to test if a semi-algebraic set is empty. On the other hand, CAD is yet, in practice, the best algorithm to count the number of connected components. Asymptotic complexity vs. practical efficiency The basic general algorithms of computational algebraic geometry have a double exponential worst case complexity. More precisely, if d is the maximal degree of the input polynomials and n the number of variables, their complexity is at most d^(2^(cn)) for some constant c, and, for some inputs, the complexity is at least d^(2^(c′n)) for another constant c′. During the last 20 years of the 20th century, various algorithms have been introduced to solve specific subproblems with a better complexity. Most of these algorithms have a complexity d^(O(n^2)). Among these algorithms which solve a subproblem of the problems solved by Gröbner bases, one may cite testing if an affine variety is empty and solving nonhomogeneous polynomial systems which have a finite number of solutions. Such algorithms are rarely implemented because, on most inputs, Faugère's F4 and F5 algorithms have a better practical efficiency and probably a similar or better complexity (probably because the evaluation of the complexity of Gröbner basis algorithms on a particular class of inputs is a difficult task which has been done only in a few special cases). The main algorithms of real algebraic geometry which solve a problem solved by CAD are related to the topology of semi-algebraic sets.
One may cite counting the number of connected components, testing if two points are in the same components or computing a Whitney stratification of a real algebraic set. They have a complexity of , but the constant involved by O notation is so high that using them to solve any nontrivial problem effectively solved by CAD, is impossible even if one could use all the existing computing power in the world. Therefore, these algorithms have never been implemented and this is an active research area to search for algorithms with have together a good asymptotic complexity and a good practical efficiency. Abstract modern viewpoint The modern approaches to algebraic geometry redefine and effectively extend the range of basic objects in various levels of generality to schemes, formal schemes, ind-schemes, algebraic spaces, algebraic stacks and so on. The need for this arises already from the useful ideas within theory of varieties, e.g. the formal functions of Zariski can be accommodated by introducing nilpotent elements in structure rings; considering spaces of loops and arcs, constructing quotients by group actions and developing formal grounds for natural intersection theory and deformation theory lead to some of the further extensions. Most remarkably, in the early 1960s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra which are locally ringed spaces which form a category which is antiequivalent to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set theoretic sense is then replaced by a Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc; nowadays some other examples became prominent including Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. Sometimes other algebraic sites replace the category of affine schemes. For example, Nikolai Durov has introduced commutative algebraic monads as a generalization of local objects in a generalized algebraic geometry. Versions of a tropical geometry, of an absolute geometry over a field of one element and an algebraic analogue of Arakelov's geometry were realized in this setup. Another formal generalization is possible to universal algebraic geometry in which every variety of algebras has its own algebraic geometry. The term variety of algebras should not be confused with algebraic variety. The language of schemes, stacks and generalizations has proved to be a valuable way of dealing with geometric concepts and became cornerstones of modern algebraic geometry. Algebraic stacks can be further generalized and for many practical questions like deformation theory and intersection theory, this is often the most natural approach. 
One can extend the Grothendieck site of affine schemes to a higher categorical site of derived affine schemes, by replacing the commutative rings with an infinity category of differential graded commutative algebras, or of simplicial commutative rings or a similar category with an appropriate variant of a Grothendieck topology. One can also replace presheaves of sets by presheaves of simplicial sets (or of infinity groupoids). Then, in presence of an appropriate homotopic machinery one can develop a notion of derived stack as such a presheaf on the infinity category of derived affine schemes, which is satisfying certain infinite categorical version of a sheaf axiom (and to be algebraic, inductively a sequence of representability conditions). Quillen model categories, Segal categories and quasicategories are some of the most often used tools to formalize this yielding the derived algebraic geometry, introduced by the school of Carlos Simpson, including Andre Hirschowitz, Bertrand Toën, Gabrielle Vezzosi, Michel Vaquié and others; and developed further by Jacob Lurie, Bertrand Toën, and Gabriele Vezzosi. Another (noncommutative) version of derived algebraic geometry, using A-infinity categories has been developed from the early 1990s by Maxim Kontsevich and followers. History Before the 16th century Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC. The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a2b for given sides a and b. Menaechmus () considered the problem geometrically by intersecting the pair of plane conics ay = x2 and xy = ab. In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding coordinates using geometric methods like using parabolas and curves. Medieval mathematicians, including Omar Khayyam, Leonardo of Pisa, Gersonides and Nicole Oresme in the Medieval Period, solved certain cubic and quadratic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) believed that there was a relationship between arithmetic, algebra and geometry. This was criticized by Jeffrey Oaks, who claims that the study of curves by means of equations originated with Descartes in the seventeenth century. Renaissance Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" on their studies of the cubic equation. The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th and 17th century mathematicians, notably Blaise Pascal who argued against the use of algebraic and analytical methods in geometry. 
The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes). During the same period, Blaise Pascal and Gérard Desargues approached geometry from a different perspective, developing the synthetic notions of projective geometry. Pascal and Desargues also studied curves, but from the purely geometrical point of view: the analog of the Greek ruler and compass construction. Ultimately, the analytic geometry of Descartes and Fermat won out, for it supplied the 18th century mathematicians with concrete quantitative tools needed to study physical problems using the new calculus of Newton and Leibniz. However, by the end of the 18th century, most of the algebraic character of coordinate geometry was subsumed by the calculus of infinitesimals of Lagrange and Euler. 19th and early 20th century It took the simultaneous 19th century developments of non-Euclidean geometry and Abelian integrals in order to bring the old algebraic ideas back into the geometrical fold. The first of these new developments was seized up by Edmond Laguerre and Arthur Cayley, who attempted to ascertain the generalized metric properties of projective space. Cayley introduced the idea of homogeneous polynomial forms, and more specifically quadratic forms, on projective space. Subsequently, Felix Klein studied projective geometry (along with other types of geometry) from the viewpoint that the geometry on a space is encoded in a certain class of transformations on the space. By the end of the 19th century, projective geometers were studying more general kinds of transformations on figures in projective space. Rather than the projective linear transformations which were normally regarded as giving the fundamental Kleinian geometry on projective space, they concerned themselves also with the higher degree birational transformations. This weaker notion of congruence would later lead members of the 20th century Italian school of algebraic geometry to classify algebraic surfaces up to birational isomorphism. The second early 19th century development, that of Abelian integrals, would lead Bernhard Riemann to the development of Riemann surfaces. In the same period began the algebraization of the algebraic geometry through commutative algebra. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computation which is implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century until it was renewed by singularity theory and computational algebraic geometry. 20th century B. L. van der Waerden, Oscar Zariski and André Weil developed a foundation for algebraic geometry based on contemporary commutative algebra, including valuation theory and the theory of ideals. One of the goals was to give a rigorous framework for proving the results of the Italian school of algebraic geometry. 
In particular, this school used systematically the notion of generic point without any precise definition, which was first given by these authors during the 1930s. In the 1950s and 1960s, Jean-Pierre Serre and Alexander Grothendieck recast the foundations making use of sheaf theory. Later, from about 1960, and largely led by Grothendieck, the idea of schemes was worked out, in conjunction with a very refined apparatus of homological techniques. After a decade of rapid development the field stabilized in the 1970s, and new applications were made, both to number theory and to more classical geometric questions on algebraic varieties, singularities, moduli, and formal moduli. An important class of varieties, not easily understood directly from their defining equations, are the abelian varieties, which are the projective varieties whose points form an abelian group. The prototypical examples are the elliptic curves, which have a rich theory. They were instrumental in the proof of Fermat's Last Theorem and are also used in elliptic-curve cryptography. In parallel with the abstract trend of the algebraic geometry, which is concerned with general statements about varieties, methods for effective computation with concretely-given varieties have also been developed, which lead to the new area of computational algebraic geometry. One of the founding methods of this area is the theory of Gröbner bases, introduced by Bruno Buchberger in 1965. Another founding method, more specially devoted to real algebraic geometry, is the cylindrical algebraic decomposition, introduced by George E. Collins in 1973. See also: derived algebraic geometry. Analytic geometry An analytic variety over the field of real or complex numbers is defined locally as the set of common solutions of several equations involving analytic functions. It is analogous to the concept of algebraic variety in that it carries a structure sheaf of analytic functions instead of regular functions. Any complex manifold is a complex analytic variety. Since analytic varieties may have singular points, not all complex analytic varieties are manifolds. Over a non-archimedean field analytic geometry is studied via rigid analytic spaces. Modern analytic geometry over the field of complex numbers is closely related to complex algebraic geometry, as has been shown by Jean-Pierre Serre in his paper GAGA, the name of which is French for Algebraic geometry and analytic geometry. The GAGA results over the field of complex numbers may be extended to rigid analytic spaces over non-archimedean fields. Applications Algebraic geometry now finds applications in statistics, control theory, robotics, error-correcting codes, phylogenetics and geometric modelling. There are also connections to string theory, game theory, graph matchings, solitons and integer programming. See also Glossary of classical algebraic geometry Important publications in algebraic geometry List of algebraic surfaces Noncommutative algebraic geometry Notes References Sources Further reading Some classic textbooks that predate schemes Modern textbooks that do not use the language of schemes Textbooks in computational algebraic geometry Textbooks and references for schemes External links Foundations of Algebraic Geometry by Ravi Vakil, 808 pp. 
Algebraic geometry entry on PlanetMath English translation of the van der Waerden textbook The Stacks Project, an open source textbook and reference work on algebraic stacks and algebraic geometry Adjectives Project, an online database for searching examples of schemes and morphisms based on their properties
Algebraic geometry
[ "Mathematics" ]
7,744
[ "Fields of abstract algebra", "Algebraic geometry" ]
2,039
https://en.wikipedia.org/wiki/Avionics
Avionics (a portmanteau of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform. History The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics". Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy. They required a two-seat aircraft with a second crewman who operated a telegraph key to spell out messages in Morse code. During World War I, AM voice two way radio sets were made possible in 1917 (see TM (triode)) by the development of the triode vacuum tube, which were simple enough that the pilot in a single seat aircraft could use it while flying. Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its U.S. ally, particularly the magnetron vacuum tube, in the famous Tizard Mission, significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics. The civilian market has also seen a growth in cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces, have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in these high restrictive airspaces have been invented. Modern avionics Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. 
The Joint Planning and Development Office put forth a roadmap for avionics in six areas: Published Routes and Procedures – Improved navigation and routing Negotiated Trajectories – Adding data communications to create preferred routes dynamically Delegated Separation – Enhanced situational awareness in the air and on the ground LowVisibility/CeilingApproach/Departure – Allowing operations with weather constraints with less ground infrastructure Surface Operations – To increase safety in approach and departure ATM Efficiencies – Improving the air traffic management (ATM) process Market The Aircraft Electronics Association reports $1.73 billion avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America, forward-fit represented 42.3% while 57.7% were retrofits as the U.S. deadline of January 1, 2020 for mandatory ADS-B out approach. Aircraft avionics The cockpit or, in larger aircraft, under the cockpit of an aircraft or in a movable nosecone, is a typical location for avionic bay equipment, including control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28‑volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 volts 400 Hz, AC. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo), Shadin Avionics, and Avidyne Corporation. International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC. Avionics Installation Avionics installation is a critical aspect of modern aviation, ensuring that aircraft are equipped with the necessary electronic systems for safe and efficient operation. These systems encompass a wide range of functions, including communication, navigation, monitoring, flight control, and weather detection. Avionics installations are performed on all types of aircraft, from small general aviation planes to large commercial jets and military aircraft. Installation Process The installation of avionics requires a combination of technical expertise, precision, and adherence to stringent regulatory standards. The process typically involves: Planning and Design: Before installation, the avionics shop works closely with the aircraft owner to determine the required systems based on the aircraft type, intended use, and regulatory requirements. Custom instrument panels are often designed to accommodate the new systems. Wiring and Integration: Avionics systems are integrated into the aircraft’s electrical and control systems, with wiring often requiring laser marking for durability and identification. Shops use detailed schematics to ensure correct installation. Testing and Calibration: After installation, each system must be thoroughly tested and calibrated to ensure proper function. This includes ground testing, flight testing, and system alignment with regulatory standards such as those set by the FAA. Certification: Once the systems are installed and tested, the avionics shop completes the necessary certifications. 
In the U.S., this often involves compliance with FAA Part 91.411 and 91.413 for IFR (Instrument Flight Rules) operations, as well as RVSM (Reduced Vertical Separation Minimum) certification. Regulatory Standards Avionics installation is governed by strict regulatory frameworks to ensure the safety and reliability of aircraft systems. In the United States, the Federal Aviation Administration (FAA) sets the standards for avionics installations. These include guidelines for: System Performance: Avionics systems must meet performance benchmarks as defined by the FAA, ensuring they function correctly in all phases of flight. Certification: Shops performing installations must be FAA-certified, and their technicians often hold certifications such as the General Radiotelephone Operator License (GROL). Inspections: Aircraft equipped with newly installed avionics systems must undergo rigorous inspections before being cleared for flight, including both ground and flight tests. Advancements in Avionics Technology The field of avionics has seen rapid technological advancements in recent years, leading to more integrated and automated systems. Key trends include: Glass Cockpits: Traditional analog gauges are being replaced by fully integrated glass cockpit displays, providing pilots with a centralized view of all flight parameters. NextGen Technologies: ADS-B and satellite-based navigation are part of the FAA’s NextGen initiative, aimed at modernizing air traffic control and improving the efficiency of the national airspace. Autonomous Systems: Advances in artificial intelligence and machine learning are paving the way for more autonomous aircraft systems, enhancing safety and reducing pilot workload. Communications Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms. The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication. Navigation Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation system (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Some navigation systems such as GPS calculate the position automatically and display it to the flight crew on moving map displays. Older ground-based Navigation systems such as VOR or LORAN requires a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems calculate the position automatically and display it to the flight crew on moving map displays. Monitoring The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode-ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. 
In the 1970s, the average aircraft had more than 100 cockpit instruments and controls. Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed. Aircraft flight-control system Aircraft have means of automatically controlling flight. Autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steady enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload at landing or takeoff. The first simple commercial auto-pilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety critical systems, the software is very strictly tested. Fuel Systems Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers & level sensors, the FQIS computer calculates the mass of fuel remaining on board. Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner, but, by controlling pumps & valves, also manages fuel transfers around various tanks. Refuelling control to upload to a certain total mass of fuel and distribute it automatically. Transfers during flight to the tanks that feed the engines. E.G. from fuselage to wing tanks Centre of gravity control transfers from the tail (trim) tanks forward to the wings as fuel is expended Maintaining fuel in the wing tips (to alleviate wing bending due to lift in flight) & transferring to the main tanks after landing Controlling fuel jettison during an emergency to reduce the aircraft weight. Collision-avoidance systems To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution. To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. One of the major weaknesses of GPWS is the lack of "look-ahead" information, because it only provides altitude above terrain "look-down". In order to overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS). Flight recorders Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. 
They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident. Weather systems Weather systems such as weather radar (typically Arinc 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) or severe turbulence (as sensed by lightning activity) are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas. Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation. Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In‑plane weather avionics are especially popular in Africa, India, and other countries where air-travel is a growing market, but ground support is not as well developed. Aircraft management systems There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement. The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners. Mission or tactical avionics Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical means required. As with aircraft management, the bigger sensor platforms (like the E‑3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers. Police and EMS aircraft also carry sophisticated tactical sensors. Military communications While aircraft communications provide the backbone for safe flight, the tactical systems are designed to withstand the rigors of the battle field. UHF, VHF Tactical (30–88 MHz) and SatCom systems combined with ECCM methods, and cryptography secure the communications. Data links such as Link 11, 16, 22 and BOWMAN, JTRS and even TETRA provide the means of transmitting data (such as images, targeting information etc.). Radar Airborne radar was one of the first tactical sensors. The benefit of altitude providing range has meant a significant focus on airborne radar technologies. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (Arinc 708) and ground tracking/proximity radar. The military uses radar in fast jets to help pilots fly at low levels. While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft. 
Sonar Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines. Electro-optics Electro-optic systems include devices such as the head-up display (HUD), forward looking infrared (FLIR), infrared search and track and other passive infrared devices (Passive infrared sensor). These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition. ESM/DAS Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it. Aircraft networks The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include: Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft ARINC 664: See ADN above ARINC 629: Commercial Aircraft (Boeing 777) ARINC 708: Weather Radar for Commercial Aircraft ARINC 717: Flight Data Recorder for Commercial Aircraft ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350) Commercial Standard Digital Bus IEEE 1394b: Military Aircraft MIL-STD-1553: Military Aircraft MIL-STD-1760: Military Aircraft TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace See also Astrionics, similar, for spacecraft Acronyms and abbreviations in avionics Avionics software Emergency locator beacon Emergency position-indicating radiobeacon station Integrated modular avionics Notes Further reading Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006) Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007) Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005) Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D. and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005 - ). External links Avionics in Commercial Aircraft Aircraft Electronics Association (AEA) Pilot's Guide to Avionics The Avionic Systems Standardisation Committee Space Shuttle Avionics Aviation Today Avionics magazine RAES Avionics homepage Aircraft instruments Spacecraft components Electronic engineering
Avionics
[ "Technology", "Engineering" ]
3,946
[ "Computer engineering", "Avionics", "Measuring instruments", "Electronic engineering", "Aircraft instruments", "Electrical engineering" ]
2,112
https://en.wikipedia.org/wiki/Associative%20algebra
In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of the ring homomorphism of an element of K). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication. A commutative algebra is an associative algebra for which the multiplication is commutative, or, equivalently, an associative algebra that is also a commutative ring. In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital. Every ring is an associative algebra over its center and over the integers. Definition Let R be a commutative ring (so R could be a field). An associative R-algebra A (or more simply, an R-algebra A) is a ring A that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies r · (xy) = (r · x)y = x(r · y) for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.) Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is r · x = f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by f(r) = r · 1, where 1 is the multiplicative identity of A. (See also below). Every ring is an associative Z-algebra, where Z denotes the ring of the integers. A commutative algebra is an associative algebra that is also a commutative ring. As a monoid object in the category of modules The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules. Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map A × A → A) corresponds to a unique R-linear map m : A ⊗R A → A. The associativity then refers to the identity m ∘ (id ⊗ m) = m ∘ (m ⊗ id). From ring homomorphisms An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism η : R → A whose image lies in the center of A, we can make A an R-algebra by defining r · x = η(r)x for all r in R and x in A.
If A is an R-algebra, taking , the same formula in turn defines a ring homomorphism whose image lies in the center. If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism . The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms for a fixed R, i.e., commutative R-algebras, and whose morphisms are ring homomorphisms that are under R; i.e., is (i.e., the coslice category of the category of commutative rings under R.) The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R. How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: Generic matrix ring. Algebra homomorphisms A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, is an associative algebra homomorphism if The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg. The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings. Examples The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics. Algebra Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent. Any ring of characteristic n is a (Z/nZ)-algebra in the same way. Given an R-module M, the endomorphism ring of M, denoted EndR(M) is an R-algebra by defining . Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module. In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K. The complex numbers form a 2-dimensional commutative algebra over the real numbers. The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions). Every polynomial ring is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set . The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E. The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure). Given a module M over a commutative ring R, the direct sum of modules has a structure of an R-algebra by thinking M consists of infinitesimal elements; i.e., the multiplication is given as . The notion is sometimes called the algebra of dual numbers. 
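The last item above, the algebra of dual numbers, can be made concrete with a few lines of code. The following is a minimal sketch (the class name and the choice R = M = Z are illustrative assumptions, not taken from the text): it implements the square-zero multiplication (a, x)(b, y) = (ab, ay + bx) on pairs, the standard rule for this construction, so that every element of M squares to zero.

# Dual numbers Z[eps]/(eps**2): a pair (a, x) stands for a + x*eps.
class Dual:
    def __init__(self, a, x):
        self.a, self.x = a, x            # a: ring part, x: "infinitesimal" part

    def __add__(self, other):
        return Dual(self.a + other.a, self.x + other.x)

    def __mul__(self, other):
        # (a + x*eps)(b + y*eps) = ab + (ay + bx)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.x + other.a * self.x)

    def __repr__(self):
        return f"{self.a} + {self.x}*eps"

eps = Dual(0, 1)
print(eps * eps)                  # 0 + 0*eps : the module part is square-zero
print(Dual(2, 3) * Dual(5, 7))    # 10 + 29*eps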
A quasi-free algebra, introduced by Cuntz and Quillen, is a sort of generalization of a free algebra and a semisimple algebra over an algebraically closed field. Representation theory The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra. If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups. If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A. A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph. Analysis Given any Banach space X, the continuous linear operators form an associative algebra (using composition of operators as multiplication); this is a Banach algebra. Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise. The set of semimartingales defined on the filtered probability space forms a ring under stochastic integration. The Weyl algebra An Azumaya algebra Geometry and combinatorics The Clifford algebras, which are useful in geometry and physics. Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics. The partition algebra and its subalgebras, including the Brauer algebra and the Temperley-Lieb algebra. A differential graded algebra is an associative algebra together with a grading and a differential. For example, the de Rham algebra , where consists of differential p-forms on a manifold M, is a differential graded algebra. Mathematical physics A Poisson algebra is a commutative associative algebra over a field together with a structure of a Lie algebra so that the Lie bracket satisfies the Leibniz rule; i.e., . Given a Poisson algebra , consider the vector space of formal power series over . If has a structure of an associative algebra with multiplication such that, for , then is called a deformation quantization of . A quantized enveloping algebra. The dual of such an algebra turns out to be an associative algebra (see ) and is, philosophically speaking, the (quantized) coordinate ring of a quantum group. Gerstenhaber algebra Constructions Subalgebras A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A. Quotient algebras Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since . This gives the quotient ring the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra. Direct products The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication. Free products One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras. Tensor products The tensor product of two R-algebras is also an R-algebra in a natural way. 
See tensor product of algebras for more details. Given a commutative ring R and any ring A the tensor product R ⊗Z A can be given the structure of an R-algebra by defining . The functor which sends A to is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings. Free algebra A free algebra is an algebra generated by symbols. If one imposes commutativity; i.e., take the quotient by commutators, then one gets a polynomial algebra. Dual of an associative algebra Let A be an associative algebra over a commutative ring R. Since A is in particular a module, we can take the dual module A* of A. A priori, the dual A* need not have a structure of an associative algebra. However, A may come with an extra structure (namely, that of a Hopf algebra) so that the dual is also an associative algebra. For example, take A to be the ring of continuous functions on a compact group G. Then, not only A is an associative algebra, but it also comes with the co-multiplication and co-unit . The "co-" refers to the fact that they satisfy the dual of the usual multiplication and unit in the algebra axiom. Hence, the dual A* is an associative algebra. The co-multiplication and co-unit are also important in order to form a tensor product of representations of associative algebras (see below). Enveloping algebra Given an associative algebra A over a commutative ring R, the enveloping algebra Ae of A is the algebra or , depending on authors. Note that a bimodule over A is exactly a left module over Ae. Separable algebra Let A be an algebra over a commutative ring R. Then the algebra A is a right module over with the action . Then, by definition, A is said to separable if the multiplication map splits as an Ae-linear map, where is an Ae-module by . Equivalently, A is separable if it is a projective module over ; thus, the -projective dimension of A, sometimes called the bidimension of A, measures the failure of separability. Finite-dimensional algebra Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring. Commutative case As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field and thus the following are equivalent is separable. is reduced, where is some algebraic closure of k. for some n. is the number of -algebra homomorphisms . Let , the profinite group of finite Galois extensions of k. Then is an anti-equivalence of the category of finite-dimensional separable k-algebras to the category of finite sets with continuous -actions. Noncommutative case Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., . More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), the fact known as the Artin–Wedderburn theorem. The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals.) 
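A hedged worked example of the Artin–Wedderburn decomposition mentioned above (standard representation-theoretic facts, not taken from the text): the group algebra C[S3] of the symmetric group S3 is a 6-dimensional semisimple C-algebra whose irreducible representations have dimensions 1, 1 and 2, so C[S3] ≅ C × C × M2(C), with 1 + 1 + 4 = 6 matching the dimension count; its Jacobson radical is zero, as for any semisimple algebra.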
The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of as a module over the enveloping algebra is at most one, then the natural surjection splits; i.e., A contains a subalgebra B such that is an isomorphism. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras. Lattices and orders Let R be a Noetherian integral domain with field of fractions K (for example, they can be Z, Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, . Let AK be a finite-dimensional K-algebra. An order in AK is an R-subalgebra that is a lattice. In general, there are a lot fewer orders than lattices; e.g., Z is a lattice in Q but not an order (since it is not an algebra). A maximal order is an order that is maximal among all the orders. Related concepts Coalgebras An associative algebra over K is given by a K-vector space A endowed with a bilinear map having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism identifying the scalar multiples of the multiplicative identity. If the bilinear map is reinterpreted as a linear map (i.e., morphism in the category of K-vector spaces) (by the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form and one of the form ) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra. There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above. Representations A representation of an algebra A is an algebra homomorphism from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V). If A and B are two algebras, and and are two representations, then there is a (canonical) representation of the tensor product algebra on the vector space . However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below. Motivation for a Hopf algebra Consider, for example, two representations and . One might try to form a tensor product representation according to how it acts on the product vector space, so that However, such a map would not be linear, since one would have for . 
One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism , and defining the tensor product representation as Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups). Motivation for a Lie algebra One can try to be more clever in defining a tensor product. Consider, for example, so that the action on the tensor product space is given by . This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication: . But, in general, this does not equal . This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra. Non-unital algebras Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital. One example of a non-unital associative algebra is given by the set of all functions whose limit as x nears infinity is zero. Another example is the vector space of continuous periodic functions, together with the convolution product. See also Abstract algebra Algebraic structure Algebra over a field Sheaf of algebras, a sort of an algebra over a ringed space Deligne's conjecture on Hochschild cohomology Notes Citations References James Byrnie Shaw (1907) A Synopsis of Linear Associative Algebra, link from Cornell University Historical Math Monographs. Ross Street (1998) Quantum Groups: an entrée to modern algebra, an overview of index-free notation. Algebras Algebraic geometry
Associative algebra
[ "Mathematics" ]
4,551
[ "Mathematical structures", "Algebras", "Fields of abstract algebra", "Algebraic structures", "Algebraic geometry" ]
2,268
https://en.wikipedia.org/wiki/Chemistry%20of%20ascorbic%20acid
Ascorbic acid is an organic compound with formula C6H8O6, originally called hexuronic acid. It is a white solid, but impure samples can appear yellowish. It dissolves freely in water to give mildly acidic solutions. It is a mild reducing agent. Ascorbic acid exists as two enantiomers (mirror-image isomers), commonly denoted "l" (for "levo") and "d" (for "dextro"). The l isomer is the one most often encountered: it occurs naturally in many foods, and is one form ("vitamer") of vitamin C, an essential nutrient for humans and many animals. Deficiency of vitamin C causes scurvy, formerly a major disease of sailors in long sea voyages. It is used as a food additive and a dietary supplement for its antioxidant properties. The "d" form (erythorbic acid) can be made by chemical synthesis, but has no significant biological role. History The antiscorbutic properties of certain foods were demonstrated in the 18th century by James Lind. In 1907, Axel Holst and Theodor Frølich discovered that the antiscorbutic factor was a water-soluble chemical substance, distinct from the one that prevented beriberi. Between 1928 and 1932, Albert Szent-Györgyi isolated a candidate for this substance, which he called "hexuronic acid", first from plants and later from animal adrenal glands. In 1932 Charles Glen King confirmed that it was indeed the antiscorbutic factor. In 1933, sugar chemist Walter Norman Haworth, working with samples of "hexuronic acid" that Szent-Györgyi had isolated from paprika and sent him in the previous year, deduced the correct structure and optical-isomeric nature of the compound, and in 1934 reported its first synthesis. In reference to the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed the name "a-scorbic acid" for the compound, later specified as l-ascorbic acid. Because of their work, in 1937 two Nobel Prizes, in Chemistry and in Physiology or Medicine, were awarded to Haworth and Szent-Györgyi, respectively. Chemical properties Acidity Ascorbic acid is a furan-based lactone of 2-ketogluconic acid. It contains an enediol group adjacent to the carbonyl. This −C(OH)=C(OH)−C(=O)− structural pattern is characteristic of reductones, and increases the acidity of one of the enol hydroxyl groups. The deprotonated conjugate base is the ascorbate anion, which is stabilized by electron delocalization that results from resonance between two forms. For this reason, ascorbic acid is much more acidic than would be expected if the compound contained only isolated hydroxyl groups. Salts The ascorbate anion forms salts, such as sodium ascorbate, calcium ascorbate, and potassium ascorbate. Esters Ascorbic acid can also react with organic acids as an alcohol, forming esters such as ascorbyl palmitate and ascorbyl stearate. Nucleophilic attack Nucleophilic attack of ascorbic acid on a proton results in a 1,3-diketone. Oxidation The ascorbate ion is the predominant species at typical biological pH values. It is a mild reducing agent and antioxidant, typically reacting with oxidants of the reactive oxygen species, such as the hydroxyl radical. Reactive oxygen species are damaging to animals and plants at the molecular level due to their possible interaction with nucleic acids, proteins, and lipids. Sometimes these radicals initiate chain reactions. Ascorbate can terminate these chain radical reactions by electron transfer. The oxidized forms of ascorbate are relatively unreactive and do not cause cellular damage. 
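The two-electron oxidation underlying this antioxidant behaviour can be summarized with a standard textbook half-reaction (a conventional summary, not reproduced from the text above): C6H8O6 → C6H6O6 + 2 H+ + 2 e−, i.e. ascorbic acid is oxidized to dehydroascorbic acid. The same stoichiometry is what is exploited in the iodometric determination discussed later, where C6H8O6 + I2 → C6H6O6 + 2 HI.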
Ascorbic acid and its sodium, potassium, and calcium salts are commonly used as antioxidant food additives. These compounds are water-soluble and thus cannot protect fats from oxidation. For this purpose, the fat-soluble esters of ascorbic acid with long-chain fatty acids (ascorbyl palmitate or ascorbyl stearate) can be used as antioxidant food additives. A sodium-dependent active transport process enables absorption of ascorbic acid from the intestine. Ascorbate readily donates a hydrogen atom to free radicals, forming the radical anion semidehydroascorbate (also known as monodehydroascorbate), a resonance-stabilized semitrione. Loss of an electron from semidehydroascorbate to produce the 1,2,3-tricarbonyl pseudodehydroascorbate is thermodynamically disfavored, which helps prevent propagation of free radical chain reactions such as autoxidation. However, being a good electron donor, excess ascorbate in the presence of free metal ions can not only promote but also initiate free radical reactions, thus making it a potentially dangerous pro-oxidative compound in certain metabolic contexts. Semidehydroascorbate oxidation instead occurs in conjunction with hydration, yielding the bicyclic hemiketal dehydroascorbate. In particular, semidehydroascorbate undergoes disproportionation to ascorbate and dehydroascorbate. Aqueous solutions of dehydroascorbate are unstable, undergoing hydrolysis with a half-life of 5–15 minutes. Decomposition products include diketogulonic acid, xylonic acid, threonic acid and oxalic acid. Other reactions It creates volatile compounds when mixed with glucose and amino acids at 90 °C. It is a cofactor in tyrosine oxidation, though because a crude extract of animal liver is used, it is unclear which reaction catalyzed by which enzyme is being helped here. Because it reduces iron(III) and chelates iron ions, it enhances the oral absorption of non-heme iron. This property also applies to its enantiomer. Conversion to oxalate In 1958, it was discovered that ascorbic acid can be converted to oxalate, a key component of calcium oxalate kidney stones. The process begins with the formation of dehydroascorbic acid (DHA) from the ascorbyl radical. While DHA can be recycled back to ascorbic acid, a portion irreversibly degrades to 2,3-diketogulonic acid (DKG), which then breaks down to both oxalate and the sugars L-erythrulose and threosone. Research conducted in the 1960s suggested ascorbic acid could substantially contribute to urinary oxalate content (possibly over 40%), but these estimates have been questioned due to methodological limitations. Subsequent large cohort studies have yielded conflicting results regarding the link between vitamin C intake and kidney stone formation. The overall clinical significance of ascorbic acid consumption to kidney stone risk, however, remains inconclusive, although several studies have suggested a potential association, especially with high-dose supplementation in men. Uses Food additive The main use of l-ascorbic acid and its salts is as food additives, mostly to combat oxidation and prevent discoloration of the product during storage. It is approved for this purpose in the EU (with E number E300), the US, Australia, and New Zealand. The "d" enantiomer (erythorbic acid) shares all of the non-biological chemical properties with the more common l enantiomer. As a result, it is an equally effective food antioxidant, and is also approved in processed foods. 
Dietary supplement Another major use of l-ascorbic acid is as a dietary supplement. It is on the World Health Organization's List of Essential Medicines. Its deficiency over a prolonged period causes scurvy, which is characterized by fatigue, widespread weakness in connective tissues and capillary fragility. It affects multiple organ systems due to its role in the biochemical reactions of connective tissue synthesis. Niche, non-food uses Ascorbic acid is easily oxidized and so is used as a reductant in photographic developer solutions (among others) and as a preservative. In fluorescence microscopy and related fluorescence-based techniques, ascorbic acid can be used as an antioxidant to increase fluorescent signal and chemically retard dye photobleaching. It is also commonly used to remove dissolved metal stains, such as iron, from fiberglass swimming pool surfaces. In plastic manufacturing, ascorbic acid can be used to assemble molecular chains more quickly and with less waste than traditional synthesis methods. Heroin users are known to use ascorbic acid as a means to convert heroin base to a water-soluble salt so that it can be injected. Owing to its reaction with iodine, it is used to negate the effects of iodine tablets in water purification. It reacts with the sterilized water, removing the taste, color, and smell of the iodine. This is why it is often sold as a second set of tablets in most sporting goods stores as Potable Aqua-Neutralizing Tablets, along with the potassium iodide tablets. Intravenous high-dose ascorbate is being used as a chemotherapeutic and biological response modifying agent, and is undergoing clinical trials. It is sometimes used as a urinary acidifier to enhance the antiseptic effect of methenamine. Synthesis Natural biosynthesis of vitamin C occurs through various processes in many plants and animals. Industrial preparation Seventy percent of the world's supply of ascorbic acid is produced in China. Ascorbic acid is prepared in industry from glucose in a method based on the historical Reichstein process. In the first of a five-step process, glucose is catalytically hydrogenated to sorbitol, which is then oxidized by the microorganism Acetobacter suboxydans to sorbose. Only one of the six hydroxy groups is oxidized by this enzymatic reaction. From this point, two routes are available. Treatment of the product with acetone in the presence of an acid catalyst converts four of the remaining hydroxyl groups to acetals. The unprotected hydroxyl group is oxidized to the carboxylic acid by reaction with the catalytic oxidant TEMPO (regenerated by sodium hypochlorite bleaching solution). Historically, industrial preparation via the Reichstein process used potassium permanganate as the bleaching solution. Acid-catalyzed hydrolysis of this product performs the dual function of removing the two acetal groups and ring-closing lactonization. This step yields ascorbic acid. Each of the five steps has a yield larger than 90%. A biotechnological process, first developed in China in the 1960s and further developed in the 1990s, bypasses the use of acetone-protecting groups: a second genetically modified microbe species, such as mutant Erwinia, among others, oxidises sorbose into 2-ketogluconic acid (2-KGA), which can then undergo ring-closing lactonization via dehydration. This is the predominant process used by the ascorbic acid industry in China, which supplies 70% of the world's ascorbic acid. 
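A quick arithmetic check on the five-step route described above (using only the stated 90% lower bound per step, so this is a floor rather than the actual figure): 0.90^5 ≈ 0.59, meaning the overall yield from glucose would be roughly 60% even in the worst case allowed by that statement, and correspondingly higher for the real per-step yields.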
Researchers are exploring means for one-step fermentation. Determination The traditional way to analyze the ascorbic acid content is by titration with an oxidizing agent, and several procedures have been developed. The popular iodometry approach uses iodine in the presence of a starch indicator. Iodine is reduced by ascorbic acid, and when all the ascorbic acid has reacted, the iodine is in excess, forming a blue-black complex with the starch indicator. This indicates the end-point of the titration. As an alternative, ascorbic acid can be treated with iodine in excess, followed by back titration with sodium thiosulfate using starch as an indicator. This iodometric method has been revised to exploit the reaction of ascorbic acid with iodate and iodide in acid solution. Electrolyzing the potassium iodide solution produces iodine, which reacts with ascorbic acid. The end of the process is determined by potentiometric titration like Karl Fischer titration. The amount of ascorbic acid can be calculated by Faraday's law. Another alternative uses N-bromosuccinimide (NBS) as the oxidizing agent in the presence of potassium iodide and starch. The NBS first oxidizes the ascorbic acid; when the latter is exhausted, the NBS liberates the iodine from the potassium iodide, which then forms the blue-black complex with starch. See also Colour retention agent Erythorbic acid: a diastereomer of ascorbic acid. Mineral ascorbates: salts of ascorbic acid Acids in wine References Further reading External links IPCS Poisons Information Monograph (PIM) 046 Interactive 3D-structure of vitamin C with details on the x-ray structure Organic acids Antioxidants Dietary antioxidants Coenzymes Corrosion inhibitors Furanones Vitamers Vitamin C Biomolecules 3-Hydroxypropenals
Chemistry of ascorbic acid
[ "Chemistry", "Biology" ]
2,857
[ "Organic acids", "Natural products", "Acids", "Biochemistry", "Coenzymes", "Organic compounds", "Structural biology", "Biomolecules", "Corrosion inhibitors", "Process chemicals", "Molecular biology" ]
2,308
https://en.wikipedia.org/wiki/Actinide
The actinide () or actinoid () series encompasses at least the 14 metallic chemical elements in the 5f series, with atomic numbers from 89 to 102, actinium through nobelium. Number 103, lawrencium, is also generally included despite being part of the 6d transition series. The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. The 1985 IUPAC Red Book recommends that actinoid be used rather than actinide, since the suffix -ide normally indicates a negative ion. However, owing to widespread current use, actinide is still allowed. Since actinoid literally means actinium-like (cf. humanoid or android), it has been argued for semantic reasons that actinium cannot logically be an actinoid, but IUPAC acknowledges its inclusion based on common usage. Actinium through nobelium are f-block elements, while lawrencium is a d-block element and a transition metal. The series mostly corresponds to the filling of the 5f electron shell, although as isolated atoms in the ground state many have anomalous configurations involving the filling of the 6d shell due to interelectronic repulsion. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from curium onwards) behave similarly to the lanthanides, the elements thorium, protactinium, and uranium are much more similar to transition metals in their chemistry, with neptunium, plutonium, and americium occupying an intermediate position. All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These have been used in nuclear reactors, and uranium and plutonium are critical elements of nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors. Of the actinides, primordial thorium and uranium occur naturally in substantial quantities. The radioactive decay of uranium produces transient amounts of actinium and protactinium, and atoms of neptunium and plutonium are occasionally produced from transmutation reactions in uranium ores. The other actinides are purely synthetic elements. Nuclear weapons tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium. In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods). Actinides Discovery, isolation and synthesis Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table; and transplutonium elements, which follow plutonium. 
Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. Most do not occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium. The existence of transuranium elements was suggested in 1934 by Enrico Fermi, based on his experiments. However, even though four actinides were known by that time, it was not yet understood that they formed a family similar to lanthanides. The prevailing view that dominated early research into transuranics was that they were regular elements in the 7th period, with thorium, protactinium and uranium corresponding to 6th-period hafnium, tantalum and tungsten, respectively. Synthesis of transuranics gradually undermined this point of view. By 1944, an observation that curium failed to exhibit oxidation states above 4 (whereas its supposed 6th period homolog, platinum, can reach oxidation state of 6) prompted Glenn Seaborg to formulate an "actinide hypothesis". Studies of known actinides and discoveries of further transuranic elements provided more data in support of this position, but the phrase "actinide hypothesis" (the implication being that a "hypothesis" is something that has not been decisively proven) remained in active use by scientists through the late 1950s. At present, there are two major methods of producing isotopes of transplutonium elements: (1) irradiation of the lighter elements with neutrons; (2) irradiation with accelerated charged particles. The first method is more important for applications, as only neutron irradiation using nuclear reactors allows the production of sizeable amounts of synthetic actinides; however, it is limited to relatively light elements. The advantage of the second method is that elements heavier than plutonium, as well as neutron-deficient isotopes, can be obtained, which are not formed during neutron irradiation. In 1962–1966, there were attempts in the United States to produce transplutonium isotopes using a series of six underground nuclear explosions. Small samples of rock were extracted from the blast area immediately after the test to study the explosion products, but no isotopes with mass number greater than 257 could be detected, despite predictions that such isotopes would have relatively long half-lives of α-decay. This non-observation was attributed to spontaneous fission owing to the large speed of the products and to other decay channels, such as neutron emission and nuclear fission. From actinium to uranium Uranium and thorium were the first actinides discovered. Uranium was identified in 1789 by the German chemist Martin Heinrich Klaproth in pitchblende ore. He named it after the planet Uranus, which had been discovered eight years earlier. Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. He then reduced the obtained yellow powder with charcoal, and extracted a black substance that he mistook for metal. Sixty years later, the French scientist Eugène-Melchior Péligot identified it as uranium oxide. He also isolated the first sample of uranium metal by heating uranium tetrachloride with metallic potassium. 
The atomic mass of uranium was then calculated as 120, but Dmitri Mendeleev in 1872 corrected it to 240 using his periodicity laws. This value was confirmed experimentally in 1882 by K. Zimmerman. Thorium oxide was discovered by Friedrich Wöhler in the mineral thorianite, which was found in Norway (1827). Jöns Jacob Berzelius characterized this material in more detail in 1828. By reduction of thorium tetrachloride with potassium, he isolated the metal and named it thorium after the Norse god of thunder and lightning Thor. The same isolation method was later used by Péligot for uranium. Actinium was discovered in 1899 by André-Louis Debierne, an assistant of Marie Curie, in the pitchblende waste left after removal of radium and polonium. He described the substance (in 1899) as similar to titanium and (in 1900) as similar to thorium. The discovery of actinium by Debierne was however questioned in 1971 and 2000, arguing that Debierne's publications in 1904 contradicted his earlier work of 1899–1900. This view instead credits the 1902 work of Friedrich Oskar Giesel, who discovered a radioactive element named emanium that behaved similarly to lanthanum. The name actinium comes from the , meaning beam or ray. This metal was discovered not by its own radiation but by the radiation of the daughter products. Owing to the close similarity of actinium and lanthanum and low abundance, pure actinium could only be produced in 1950. The term actinide was probably introduced by Victor Goldschmidt in 1937. Protactinium was possibly isolated in 1900 by William Crookes. It was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the short-lived isotope 234mPa (half-life 1.17 minutes) during their studies of the 238U decay chain. They named the new element brevium (from Latin brevis meaning brief); the name was changed to protoactinium (from Greek πρῶτος + ἀκτίς meaning "first beam element") in 1918 when two groups of scientists, led by the Austrian Lise Meitner and Otto Hahn of Germany and Frederick Soddy and John Arnold Cranston of Great Britain, independently discovered the much longer-lived 231Pa. The name was shortened to protactinium in 1949. This element was little characterized until 1960, when Alfred Maddock and his co-workers in the U.K. isolated 130 grams of protactinium from 60 tonnes of waste left after extraction of uranium from its ore. Neptunium and above Neptunium (named for the planet Neptune, the next planet out from Uranus, after which uranium was named) was discovered by Edwin McMillan and Philip H. Abelson in 1940 in Berkeley, California. They produced the 239Np isotope (half-life 2.4 days) by bombarding uranium with slow neutrons. It was the first transuranium element produced synthetically. Transuranium elements do not occur in sizeable quantities in nature and are commonly synthesized via nuclear reactions conducted with nuclear reactors. For example, under irradiation with reactor neutrons, uranium-238 partially converts to plutonium-239: This synthesis reaction was used by Fermi and his collaborators in their design of the reactors located at the Hanford Site, which produced significant amounts of plutonium-239 for the nuclear weapons of the Manhattan Project and the United States' post-war nuclear arsenal. Actinides with the highest mass numbers are synthesized by bombarding uranium, plutonium, curium and californium with ions of nitrogen, oxygen, carbon, neon or boron in a particle accelerator. 
Thus nobelium was produced by bombarding uranium-238 with neon-22, as 238U + 22Ne → 256No + 4 n. The first isotopes of transplutonium elements, americium-241 and curium-242, were synthesized in 1944 by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso. Curium-242 was obtained by bombarding plutonium-239 with 32-MeV α-particles: 239Pu + 4He → 242Cm + n. The americium-241 and curium-242 isotopes also were produced by irradiating plutonium in a nuclear reactor. The latter element was named after Marie Curie and her husband Pierre, who are noted for discovering radium and for their work in radioactivity. Bombarding curium-242 with α-particles resulted in an isotope of californium, 245Cf, in 1950, and a similar procedure yielded berkelium-243 from americium-241 in 1949. The new elements were named after Berkeley, California, by analogy with its lanthanide homologue terbium, which was named after the village of Ytterby in Sweden. In 1945, B. B. Cunningham obtained the first bulk chemical compound of a transplutonium element, namely americium hydroxide. Over the next few years, milligram quantities of americium and microgram amounts of curium were accumulated, which allowed production of isotopes of berkelium and californium. Sizeable amounts of these elements were produced in 1958, and the first californium compound (0.3 μg of CfOCl) was obtained in 1960 by B. B. Cunningham and J. C. Wallmann. Einsteinium and fermium were identified in 1952–1953 in the fallout from the "Ivy Mike" nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Instantaneous exposure of uranium-238 to a large neutron flux resulting from the explosion produced heavy isotopes of uranium, which underwent a series of beta decays to nuclides such as einsteinium-253 and fermium-255. The discovery of the new elements and the new data on neutron capture were initially kept secret on the orders of the US military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team were able to prepare einsteinium and fermium by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies that had been carried out on those elements. The "Ivy Mike" studies were declassified and published in 1955. The first significant (submicrogram) amounts of einsteinium were produced in 1961 by Cunningham and colleagues, but this has not been done for fermium yet. The first isotope of mendelevium, 256Md (half-life 87 min), was synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory Robert Choppin, Bernard G. Harvey and Stanley Gerald Thompson when they bombarded an 253Es target with alpha particles in the 60-inch cyclotron of Berkeley Radiation Laboratory; this was the first isotope of any element to be synthesized one atom at a time. There were several attempts to obtain isotopes of nobelium by Swedish (1957) and American (1958) groups, but the first reliable result was the synthesis of 256No by the Russian group of Georgy Flyorov in 1965, as acknowledged by the IUPAC in 1992. In their experiments, Flyorov et al. bombarded uranium-238 with neon-22. In 1961, Ghiorso et al. obtained the first isotope of lawrencium by irradiating californium (mostly californium-252) with boron-10 and boron-11 ions. The mass number of this isotope was not clearly established (possibly 258 or 259) at the time. In 1965, 256Lr was synthesized by Flyorov et al. from 243Am and 18O. 
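As a bookkeeping check on the last reaction above, the mass and atomic numbers balance if five neutrons are emitted (the neutron count is inferred from the balance, not stated in the text): 243Am + 18O → 256Lr + 5 n, with mass numbers 243 + 18 = 261 = 256 + 5 × 1 and atomic numbers 95 + 8 = 103 + 5 × 0.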
Thus IUPAC recognized the nuclear physics teams at Dubna and Berkeley as the co-discoverers of lawrencium. Isotopes Thirty-four isotopes of actinium and eight excited isomeric states of some of its nuclides are known, ranging in mass number from 203 to 236. Three isotopes, 225Ac, 227Ac and 228Ac, were found in nature and the others were produced in the laboratory; only the three natural isotopes are used in applications. Actinium-225 is a member of the radioactive neptunium series; it was first discovered in 1947 as a decay product of uranium-233 and it is an α-emitter with a half-life of 10 days. Actinium-225 is less available than actinium-228, but is more promising in radiotracer applications. Actinium-227 (half-life 21.77 years) occurs in all uranium ores, but in small quantities. One gram of uranium (in radioactive equilibrium) contains only about 2×10⁻¹⁰ gram of 227Ac. Actinium-228 is a member of the radioactive thorium series formed by the decay of 228Ra; it is a β− emitter with a half-life of 6.15 hours. In one tonne of thorium there is about 5×10⁻⁸ gram of 228Ac. It was discovered by Otto Hahn in 1906. There are 32 known isotopes of thorium ranging in mass number from 207 to 238. Of these, the longest-lived is 232Th, whose half-life of about 14 billion years means that it still exists in nature as a primordial nuclide. The next longest-lived is 230Th, an intermediate decay product of 238U with a half-life of 75,400 years. Several other thorium isotopes have half-lives over a day; all of these are also transient in the decay chains of 232Th, 235U, and 238U. Twenty-nine isotopes of protactinium are known with mass numbers 211–239 as well as three excited isomeric states. Only 231Pa and 234Pa have been found in nature. All the isotopes have short lifetimes, except for protactinium-231 (half-life 32,760 years). The most important isotopes are 231Pa and 233Pa, the latter of which is an intermediate product in obtaining uranium-233 and is the most affordable among artificial isotopes of protactinium. 233Pa has a convenient half-life and energy of γ-radiation, and thus was used in most studies of protactinium chemistry. Protactinium-233 is a β-emitter with a half-life of 26.97 days. There are 27 known isotopes of uranium, having mass numbers 215–242 (except 220). Three of them, 234U, 235U and 238U, are present in appreciable quantities in nature. Among others, the most important is 233U, which is a final product of transformation of 232Th irradiated by slow neutrons. 233U has a much higher fission efficiency by low-energy (thermal) neutrons, compared e.g. with 235U. Most uranium chemistry studies were carried out on uranium-238 owing to its long half-life of 4.4 billion years. There are 25 isotopes of neptunium with mass numbers 219–244 (except 221); they are all highly radioactive. The most popular among scientists are long-lived 237Np (t1/2 = 2.20 million years) and short-lived 239Np, 238Np (t1/2 ~ 2 days). There are 21 known isotopes of plutonium, having mass numbers 227–247. The most stable isotope of plutonium is 244Pu, with a half-life of 8.13×10⁷ years. Eighteen isotopes of americium are known with mass numbers from 229 to 247 (with the exception of 231). The most important are 241Am and 243Am, which are alpha-emitters and also emit soft, but intense γ-rays; both of them can be obtained in an isotopically pure form. Chemical properties of americium were first studied with 241Am, but later shifted to 243Am, which is almost 20 times less radioactive. 
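The claim that 243Am is almost 20 times less radioactive than 241Am can be checked with a specific-activity estimate, A ∝ 1/(T½ · M); the half-lives used below (432.2 years for 241Am, 7,370 years for 243Am) are commonly cited values rather than figures given in the text, so the following Python sketch is only illustrative.

from math import log

N_A = 6.022e23                  # Avogadro's number, 1/mol
SECONDS_PER_YEAR = 3.156e7

def specific_activity(half_life_years, molar_mass):
    """Approximate activity, in becquerels, of 1 gram of a pure radionuclide."""
    decay_constant = log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decay_constant * N_A / molar_mass

a241 = specific_activity(432.2, 241)   # ~1.3e11 Bq/g
a243 = specific_activity(7370, 243)    # ~7.4e9  Bq/g
print(a241 / a243)                     # ~17, i.e. "almost 20 times" less radioactive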
The disadvantage of 243Am is production of the short-lived daughter isotope 239Np, which has to be considered in the data analysis. Among 19 isotopes of curium, ranging in mass number from 233 to 251, the most accessible are 242Cm and 244Cm; they are α-emitters, but with much shorter lifetimes than the americium isotopes. These isotopes emit almost no γ-radiation, but undergo spontaneous fission with the associated emission of neutrons. More long-lived isotopes of curium (245–248Cm, all α-emitters) are formed as a mixture during neutron irradiation of plutonium or americium. Upon short irradiation, this mixture is dominated by 246Cm, and then 248Cm begins to accumulate. Both of these isotopes, especially 248Cm, have a longer half-life (3.48×10⁵ years for 248Cm) and are much more convenient for carrying out chemical research than 242Cm and 244Cm, but they also have a rather high rate of spontaneous fission. 247Cm has the longest lifetime among isotopes of curium (1.56×10⁷ years), but is not formed in large quantities because of the strong fission induced by thermal neutrons. Seventeen isotopes of berkelium have been identified with mass numbers 233, 234, 236, 238, and 240–252. Only 249Bk is available in large quantities; it has a relatively short half-life of 330 days and emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to β-radiation), but is sometimes used to detect this isotope. 247Bk is an alpha-emitter with a long half-life of 1,380 years, but it is hard to obtain in appreciable quantities; it is not formed upon neutron irradiation of plutonium because β-decay of curium isotopes with mass number below 248 is not known. (247Cm would actually release energy by β-decaying to 247Bk, but this has never been seen.) The 20 isotopes of californium with mass numbers 237–256 are formed in nuclear reactors; californium-253 is a β-emitter and the rest are α-emitters. The isotopes with even mass numbers (250Cf, 252Cf and 254Cf) have a high rate of spontaneous fission, especially 254Cf, of which 99.7% decays by spontaneous fission. Californium-249 has a relatively long half-life (352 years), weak spontaneous fission and strong γ-emission that facilitates its identification. 249Cf is not formed in large quantities in a nuclear reactor because of the slow β-decay of the parent isotope 249Bk and a large cross section of interaction with neutrons, but it can be accumulated in the isotopically pure form as the β-decay product of (pre-selected) 249Bk. Californium produced by reactor-irradiation of plutonium mostly consists of 250Cf and 252Cf, the latter being predominant for large neutron fluences, and its study is hindered by the strong neutron radiation. Among the 18 known isotopes of einsteinium with mass numbers from 240 to 257, the most affordable is 253Es. It is an α-emitter with a half-life of 20.47 days, a relatively weak γ-emission and a small spontaneous fission rate as compared with the isotopes of californium. Prolonged neutron irradiation also produces a long-lived isotope 254Es (t1/2 = 275.5 days). Twenty isotopes of fermium are known with mass numbers of 241–260. 254Fm, 255Fm and 256Fm are α-emitters with short half-lives (hours), and can be isolated in significant amounts. 257Fm (t1/2 = 100 days) can accumulate upon prolonged and strong irradiation. All these isotopes are characterized by high rates of spontaneous fission. 
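Half-lives like those above translate into surviving fractions through N(t)/N0 = 2^(−t/T½); a small illustrative Python helper, using the 100-day half-life quoted for 257Fm as the example value:

def remaining_fraction(t_days, half_life_days):
    """Fraction of a radionuclide surviving after t_days."""
    return 0.5 ** (t_days / half_life_days)

# 257Fm (half-life ~100 days): about 8% is left after one year of storage
print(remaining_fraction(365, 100))   # ~0.08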
Among the 17 known isotopes of mendelevium (mass numbers from 244 to 260), the most studied is 256Md, which mainly decays through electron capture (α-radiation is ≈10%) with a half-life of 77 minutes. Another alpha emitter, 258Md, has a half-life of 53 days. Both these isotopes are produced from rare einsteinium (253Es and 255Es respectively), that therefore limits their availability. Long-lived isotopes of nobelium and isotopes of lawrencium (and of heavier elements) have relatively short half-lives. For nobelium, 13 isotopes are known, with mass numbers 249–260 and 262. The chemical properties of nobelium and lawrencium were studied with 255No (t1/2 = 3 min) and 256Lr (t1/2 = 35 s). The longest-lived nobelium isotope, 259No, has a half-life of approximately 1 hour. Lawrencium has 14 known isotopes with mass numbers 251–262, 264, and 266. The most stable of them is 266Lr with a half life of 11 hours. Among all of these, the only isotopes that occur in sufficient quantities in nature to be detected in anything more than traces and have a measurable contribution to the atomic weights of the actinides are the primordial 232Th, 235U, and 238U, and three long-lived decay products of natural uranium, 230Th, 231Pa, and 234U. Natural thorium consists of 0.02(2)% 230Th and 99.98(2)% 232Th; natural protactinium consists of 100% 231Pa; and natural uranium consists of 0.0054(5)% 234U, 0.7204(6)% 235U, and 99.2742(10)% 238U. Formation in nuclear reactors The figure buildup of actinides is a table of nuclides with the number of neutrons on the horizontal axis (isotopes) and the number of protons on the vertical axis (elements). The red dot divides the nuclides in two groups, so the figure is more compact. Each nuclide is represented by a square with the mass number of the element and its half-life. Naturally existing actinide isotopes (Th, U) are marked with a bold border, alpha emitters have a yellow colour, and beta emitters have a blue colour. Pink indicates electron capture (236Np), whereas white stands for a long-lasting metastable state (242Am). The formation of actinide nuclides is primarily characterised by: Neutron capture reactions (n,γ), which are represented in the figure by a short right arrow. The (n,2n) reactions and the less frequently occurring (γ,n) reactions are also taken into account, both of which are marked by a short left arrow. Even more rarely and only triggered by fast neutrons, the (n,3n) reaction occurs, which is represented in the figure with one example, marked by a long left arrow. In addition to these neutron- or gamma-induced nuclear reactions, the radioactive conversion of actinide nuclides also affects the nuclide inventory in a reactor. These decay types are marked in the figure by diagonal arrows. The beta-minus decay, marked with an arrow pointing up-left, plays a major role for the balance of the particle densities of the nuclides. Nuclides decaying by positron emission (beta-plus decay) or electron capture (ϵ) do not occur in a nuclear reactor except as products of knockout reactions; their decays are marked with arrows pointing down-right. Due to the long half-lives of the given nuclides, alpha decay plays almost no role in the formation and decay of the actinides in a power reactor, as the residence time of the nuclear fuel in the reactor core is rather short (a few years). Exceptions are the two relatively short-lived nuclides 242Cm (T1/2 = 163 d) and 236Pu (T1/2 = 2.9 y). 
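To see why these two nuclides are the exceptions, a rough worked figure (the three-year stay is only the order of magnitude implied by "a few years" above): for 242Cm, with T½ = 163 d, the fraction surviving a three-year residence in the core is 2^(−3×365/163) ≈ 0.01, so essentially all of it α-decays in place, whereas nuclides with half-lives of many thousands of years lose a negligible fraction over the same period.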
Only for these two cases, the α decay is marked on the nuclide map by a long arrow pointing down-left. A few long-lived actinide isotopes, such as 244Pu and 250Cm, cannot be produced in reactors because neutron capture does not happen quickly enough to bypass the short-lived beta-decaying nuclides 243Pu and 249Cm; they can however be generated in nuclear explosions, which have much higher neutron fluxes. Distribution in nature Thorium and uranium are the most abundant actinides in nature, with respective mass concentrations of 16 ppm and 4 ppm. Uranium mostly occurs in the Earth's crust as a mixture of its oxides in the mineral uraninite, which is also called pitchblende because of its black color. There are several dozens of other uranium minerals such as carnotite (KUO2VO4·3H2O) and autunite (Ca(UO2)2(PO4)2·nH2O). The isotopic composition of natural uranium is 238U (relative abundance 99.2742%), 235U (0.7204%) and 234U (0.0054%); of these, 238U has the longest half-life, 4.51 billion years. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27.3% was mined in Kazakhstan. Other important uranium mining countries are Canada (20.1%), Australia (15.7%), Namibia (9.1%), Russia (7.0%), and Niger (6.4%). The most abundant thorium minerals are thorianite (ThO2), thorite (ThSiO4) and monazite ((Ce,La,Nd,Th)PO4). Most thorium minerals contain uranium and vice versa, and they all have a significant fraction of lanthanides. Rich deposits of thorium minerals are located in the United States (440,000 tonnes), Australia and India (~300,000 tonnes each) and Canada (~100,000 tonnes). The abundance of actinium in the Earth's crust is only about 5×10⁻¹⁵%. Actinium is mostly present in uranium-containing minerals, but also in other minerals, though in much smaller quantities. The content of actinium in most natural objects corresponds to the isotopic equilibrium of the parent isotope 235U, and it is not affected by the weak Ac migration. Protactinium is more abundant (about 10⁻¹²%) in the Earth's crust than actinium. It was discovered in uranium ore in 1913 by Fajans and Göhring. As with actinium, the distribution of protactinium follows that of 235U. The half-life of the longest-lived isotope of neptunium, 237Np, is negligible compared to the age of the Earth. Thus neptunium is present in nature in negligible amounts produced as intermediate decay products of other isotopes. Traces of plutonium in uranium minerals were first found in 1942, and the more systematic results on 239Pu are summarized in the table (no other plutonium isotopes could be detected in those samples). The upper limit of abundance of the longest-living isotope of plutonium, 244Pu, is 3×10⁻²⁰%. Plutonium could not be detected in samples of lunar soil. Owing to its scarcity in nature, most plutonium is produced synthetically. Extraction Owing to the low abundance of actinides, their extraction is a complex, multistep process. Fluorides of actinides are usually used because they are insoluble in water and can be easily separated with redox reactions. The fluorides are reduced with calcium, magnesium or barium, in reactions of the type AnF4 + 2 Ca → An + 2 CaF2. Among the actinides, thorium and uranium are the easiest to isolate. Thorium is extracted mostly from monazite: thorium pyrophosphate (ThP2O7) is reacted with nitric acid, and the produced thorium nitrate is treated with tributyl phosphate. Rare-earth impurities are separated by increasing the pH in sulfate solution. In another extraction method, monazite is decomposed with a 45% aqueous solution of sodium hydroxide at 140 °C. 
Mixed metal hydroxides are extracted first, filtered at 80 °C, washed with water and dissolved with concentrated hydrochloric acid. Next, the acidic solution is neutralized with hydroxides to pH = 5.8, which results in precipitation of thorium hydroxide (Th(OH)4) contaminated with ~3% of rare-earth hydroxides; the rest of the rare-earth hydroxides remain in solution. Thorium hydroxide is dissolved in an inorganic acid and then purified from the rare earth elements. An efficient method is the dissolution of thorium hydroxide in nitric acid, because the resulting solution can be purified by extraction with organic solvents: Th(OH)4 + 4 HNO3 → Th(NO3)4 + 4 H2O Metallic thorium is separated from the anhydrous oxide, chloride or fluoride by reacting it with calcium in an inert atmosphere: ThO2 + 2 Ca → 2 CaO + Th Sometimes thorium is extracted by electrolysis of a fluoride in a mixture of sodium and potassium chloride at 700–800 °C in a graphite crucible. Highly pure thorium can be extracted from its iodide with the crystal bar process. Uranium is extracted from its ores in various ways. In one method, the ore is burned and then reacted with nitric acid to convert uranium into a dissolved state. Treating the solution with a solution of tributyl phosphate (TBP) in kerosene transforms uranium into an organic form UO2(NO3)2(TBP)2. The insoluble impurities are filtered off and the uranium is extracted by reaction with hydroxides as (NH4)2U2O7 or with hydrogen peroxide as UO4·2H2O. When the uranium ore is rich in such minerals as dolomite, magnesite, etc., those minerals consume much acid. In this case, the carbonate method is used for uranium extraction. Its main component is an aqueous solution of sodium carbonate, which converts uranium into a complex [UO2(CO3)3]4−, which is stable in aqueous solutions at low concentrations of hydroxide ions. The advantages of the sodium carbonate method are that the chemicals have low corrosivity (compared to nitrates) and that most non-uranium metals precipitate from the solution. The disadvantage is that tetravalent uranium compounds precipitate as well. Therefore, the uranium ore is treated with sodium carbonate at elevated temperature and under oxygen pressure: 2 UO2 + O2 + 6 CO32− → 2 [UO2(CO3)3]4− This equation suggests that the best solvent for the uranyl carbonate processing is a mixture of carbonate with bicarbonate. At high pH, this results in precipitation of diuranate, which is treated with hydrogen in the presence of nickel, yielding an insoluble uranium tetracarbonate. Another separation method uses polymeric resins as a polyelectrolyte. Ion exchange processes in the resins result in separation of uranium. Uranium from the resins is washed with a solution of ammonium nitrate or nitric acid, which yields uranyl nitrate, UO2(NO3)2·6H2O. When heated, it turns into UO3, which is converted to UO2 with hydrogen: UO3 + H2 → UO2 + H2O Reacting uranium dioxide with hydrofluoric acid changes it to uranium tetrafluoride, which yields uranium metal upon reaction with magnesium metal: 4 HF + UO2 → UF4 + 2 H2O To extract plutonium, neutron-irradiated uranium is dissolved in nitric acid, and a reducing agent (FeSO4, or H2O2) is added to the resulting solution. This addition changes the oxidation state of plutonium from +6 to +4, while uranium remains in the form of uranyl nitrate (UO2(NO3)2). The solution is treated with a reducing agent and neutralized with ammonium carbonate to pH = 8, which results in precipitation of Pu4+ compounds. 
In another method, Pu4+ and are first extracted with tributyl phosphate, then reacted with hydrazine washing out the recovered plutonium. The major difficulty in separation of actinium is the similarity of its properties with those of lanthanum. Thus actinium is either synthesized in nuclear reactions from isotopes of radium or separated using ion-exchange procedures. Properties Actinides have similar properties to lanthanides. Just as the 4f electron shells are filled in the lanthanides, the 5f electron shells are filled in the actinides. Because the 5f, 6d, 7s, and 7p shells are close in energy, many irregular configurations arise; thus, in gas-phase atoms, just as the first 4f electron only appears in cerium, so the first 5f electron appears even later, in protactinium. However, just as lanthanum is the first element to use the 4f shell in compounds, so actinium is the first element to use the 5f shell in compounds. The f-shells complete their filling together, at ytterbium and nobelium. The first experimental evidence for the filling of the 5f shell in actinides was obtained by McMillan and Abelson in 1940. As in lanthanides (see lanthanide contraction), the ionic radius of actinides monotonically decreases with atomic number (see also actinoid contraction). The shift of electron configurations in the gas phase does not always match the chemical behaviour. For example, the early-transition-metal-like prominence of the highest oxidation state, corresponding to removal of all valence electrons, extends up to uranium even though the 5f shells begin filling before that. On the other hand, electron configurations resembling the lanthanide congeners already begin at plutonium, even though lanthanide-like behaviour does not become dominant until the second half of the series begins at curium. The elements between uranium and curium form a transition between these two kinds of behaviour, where higher oxidation states continue to exist, but lose stability with respect to the +3 state. The +2 state becomes more important near the end of the series, and is the most stable oxidation state for nobelium, the last 5f element. Oxidation states rise again only after nobelium, showing that a new series of 6d transition metals has begun: lawrencium shows only the +3 oxidation state, and rutherfordium only the +4 state, making them respectively congeners of lutetium and hafnium in the 5d row. Physical properties Actinides are typical metals. All of them are soft and have a silvery color (but tarnish in air), relatively high density and plasticity. Some of them can be cut with a knife. Their electrical resistivity varies between 15 and 150 μΩ·cm. The hardness of thorium is similar to that of soft steel, so heated pure thorium can be rolled in sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium, but is harder than either of them. All actinides are radioactive, paramagnetic, and, with the exception of actinium, have several crystalline phases: plutonium has seven, and uranium, neptunium and californium three. The crystal structures of protactinium, uranium, neptunium and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d-transition metals. All actinides are pyrophoric, especially when finely divided, that is, they spontaneously ignite upon reaction with air at room temperature. The melting point of actinides does not have a clear dependence on the number of f-electrons. 
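For illustration, the gas-phase ground-state configurations usually quoted for a few of these elements (standard reference values, not listed in the text) show the irregular filling just described: Ac [Rn]6d¹7s², Th [Rn]6d²7s² (still no 5f electron), Pa [Rn]5f²6d¹7s², U [Rn]5f³6d¹7s², Pu [Rn]5f⁶7s², and Cm [Rn]5f⁷6d¹7s².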
The unusually low melting point of neptunium and plutonium (~640 °C) is explained by hybridization of 5f and 6d orbitals and the formation of directional bonds in these metals. Chemical properties Like the lanthanides, all actinides are highly reactive with halogens and chalcogens; however, the actinides react more easily. Actinides, especially those with a small number of 5f-electrons, are prone to hybridization. This is explained by the similarity of the electron energies at the 5f, 7s and 6d shells. Most actinides exhibit a larger variety of valence states, and the most stable are +6 for uranium, +5 for protactinium and neptunium, +4 for thorium and plutonium and +3 for actinium and other actinides. Actinium is chemically similar to lanthanum, which is explained by their similar ionic radii and electronic structures. Like lanthanum, actinium almost always has an oxidation state of +3 in compounds, but it is less reactive and has more pronounced basic properties. Among other trivalent actinides Ac3+ is least acidic, i.e. has the weakest tendency to hydrolyze in aqueous solutions. Thorium is rather active chemically. Owing to lack of electrons on 6d and 5f orbitals, tetravalent thorium compounds are colorless. At pH < 3, solutions of thorium salts are dominated by the cations [Th(H2O)8]4+. The Th4+ ion is relatively large, and depending on the coordination number can have a radius between 0.95 and 1.14 Å. As a result, thorium salts have a weak tendency to hydrolyse. The distinctive ability of thorium salts is their high solubility both in water and polar organic solvents. Protactinium exhibits two valence states; the +5 is stable, and the +4 state easily oxidizes to protactinium(V). Thus tetravalent protactinium in solutions is obtained by the action of strong reducing agents in a hydrogen atmosphere. Tetravalent protactinium is chemically similar to uranium(IV) and thorium(IV). Fluorides, phosphates, hypophosphates, iodates and phenylarsonates of protactinium(IV) are insoluble in water and dilute acids. Protactinium forms soluble carbonates. The hydrolytic properties of pentavalent protactinium are close to those of tantalum(V) and niobium(V). The complex chemical behavior of protactinium is a consequence of the start of the filling of the 5f shell in this element. Uranium has a valence from 3 to 6, the last being most stable. In the hexavalent state, uranium is very similar to the group 6 elements. Many compounds of uranium(IV) and uranium(VI) are non-stoichiometric, i.e. have variable composition. For example, the actual chemical formula of uranium dioxide is UO2+x, where x varies between −0.4 and 0.32. Uranium(VI) compounds are weak oxidants. Most of them contain the linear "uranyl" group, . Between 4 and 6 ligands can be accommodated in an equatorial plane perpendicular to the uranyl group. The uranyl group acts as a hard acid and forms stronger complexes with oxygen-donor ligands than with nitrogen-donor ligands. and are also the common form of Np and Pu in the +6 oxidation state. Uranium(IV) compounds exhibit reducing properties, e.g., they are easily oxidized by atmospheric oxygen. Uranium(III) is a very strong reducing agent. Owing to the presence of d-shell, uranium (as well as many other actinides) forms organometallic compounds, such as UIII(C5H5)3 and UIV(C5H5)4. Neptunium has valence states from 3 to 7, which can be simultaneously observed in solutions. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. 
Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds. Plutonium also exhibits valence states between 3 and 7 inclusive, and thus is chemically similar to neptunium and uranium. It is highly reactive, and quickly forms an oxide film in air. Plutonium reacts with hydrogen even at temperatures as low as 25–50 °C; it also easily forms halides and intermetallic compounds. Hydrolysis reactions of plutonium ions of different oxidation states are quite diverse. Plutonium(V) can enter polymerization reactions. The largest chemical diversity among actinides is observed in americium, which can have valence between 2 and 6. Divalent americium is obtained only in dry compounds and non-aqueous solutions (acetonitrile). Oxidation states +3, +5 and +6 are typical in aqueous solutions, but also occur in the solid state. Tetravalent americium forms stable solid compounds (dioxide, fluoride and hydroxide) as well as complexes in aqueous solutions. It was reported that in alkaline solution americium can be oxidized to the heptavalent state, but these data proved erroneous. The most stable valence of americium is 3 in aqueous solution and 3 or 4 in solid compounds. Valence 3 is dominant in all subsequent elements up to lawrencium (with the exception of nobelium). Curium can be tetravalent in solids (fluoride, dioxide). Berkelium, along with a valence of +3, also shows the valence of +4, more stable than that of curium; the valence 4 is observed in solid fluoride and dioxide. The stability of Bk4+ in aqueous solution is close to that of Ce4+. Only valence 3 was observed for californium, einsteinium and fermium. The divalent state is proven for mendelevium and nobelium, and in nobelium it is more stable than the trivalent state. Lawrencium shows valence 3 both in solutions and solids. The redox potential E(AnO22+/An4+) increases from −0.32 V in uranium, through 0.34 V (Np) and 1.04 V (Pu), to 1.34 V in americium, revealing the increasing reducing ability of the An4+ ion from americium to uranium. All actinides form AnH3 hydrides of black color with salt-like properties. Actinides also produce carbides with the general formula of AnC or AnC2 (U2C3 for uranium) as well as sulfides An2S3 and AnS2. Compounds Oxides and hydroxides Some actinides can exist in several oxide forms such as An2O3, AnO2, An2O5 and AnO3 (An = actinide). For all actinides, the oxides AnO3 are amphoteric, while An2O3, AnO2 and An2O5 are basic; they easily react with water, forming bases: An2O3 + 3 H2O → 2 An(OH)3. These bases are poorly soluble in water and by their activity are close to the hydroxides of rare-earth metals. Np(OH)3 has not yet been synthesized; Pu(OH)3 has a blue color, while Am(OH)3 is pink and Cm(OH)3 is colorless. Bk(OH)3 and Cf(OH)3 are also known, as are tetravalent hydroxides for Np, Pu and Am and pentavalent ones for Np and Am. The strongest base is that of actinium. All compounds of actinium are colorless, except for black actinium sulfide (Ac2S3). Dioxides of tetravalent actinides crystallize in the cubic system, the same as calcium fluoride. Thorium reacting with oxygen exclusively forms the dioxide: Th + O2 → ThO2 (at about 1000 °C). Thorium dioxide is a refractory material with the highest melting point of any known oxide (3390 °C). Adding 0.8–1% ThO2 to tungsten stabilizes its structure, so the doped filaments have better mechanical stability to vibrations. 
To dissolve ThO2 in acids, it is heated to 500–600 °C; heating above 600 °C produces a form of ThO2 that is very resistant to acids and other reagents. A small addition of fluoride ions catalyses the dissolution of thorium dioxide in acids. Two protactinium oxides have been obtained: PaO2 (black) and Pa2O5 (white); the former is isomorphic with ThO2 and the latter is easier to obtain. Both oxides are basic, and Pa(OH)5 is a weak, poorly soluble base. Decomposition of certain salts of uranium, for example UO2(NO3)2·6H2O in air at 400 °C, yields orange or yellow UO3. This oxide is amphoteric and forms several hydroxides, the most stable being uranyl hydroxide UO2(OH)2. Reaction of uranium(VI) oxide with hydrogen results in uranium dioxide, which is similar in its properties to ThO2. This oxide is also basic and corresponds to the uranium hydroxide U(OH)4. Plutonium, neptunium and americium form two basic oxides: An2O3 and AnO2. Neptunium trioxide is unstable; thus, only Np3O8 could be obtained so far. However, the oxides of plutonium and neptunium with the chemical formulas AnO2 and An2O3 are well characterized. Salts Actinides easily react with halogens, forming salts with the formulas MX3 and MX4 (X = halogen). Thus the first berkelium compound, BkCl3, was synthesized in 1962 in an amount of only 3 nanograms. Like the halides of the rare earth elements, actinide chlorides, bromides, and iodides are water-soluble, and fluorides are insoluble. Uranium easily yields a colorless hexafluoride, which sublimates at a temperature of 56.5 °C; because of its volatility, it is used in the separation of uranium isotopes with gas centrifuges or gaseous diffusion. Actinide hexafluorides have properties close to those of anhydrides. They are very sensitive to moisture and hydrolyze, forming AnO2F2. The pentachloride and black hexachloride of uranium were synthesized, but they are both unstable. Action of acids on actinides yields salts, and if the acids are non-oxidizing then the actinide in the salt is in a low-valence state: U + 2 H2SO4 → U(SO4)2 + 2 H2 and 2 Pu + 6 HCl → 2 PuCl3 + 3 H2. However, in these reactions the hydrogen generated can react with the metal, forming the corresponding hydride. Uranium reacts with acids and water much more easily than thorium. Actinide salts can also be obtained by dissolving the corresponding hydroxides in acids. Nitrates, chlorides, sulfates and perchlorates of actinides are water-soluble. When crystallizing from aqueous solutions, these salts form hydrates, such as Th(NO3)4·6H2O, Th(SO4)2·9H2O and Pu2(SO4)3·7H2O. Salts of high-valence actinides easily hydrolyze. Thus, the colorless sulfate, chloride, perchlorate and nitrate of thorium transform into basic salts with formulas Th(OH)2SO4 and Th(OH)3NO3. The solubility behaviour of trivalent and tetravalent actinide salts is like that of lanthanide salts: phosphates, fluorides, oxalates, iodates and carbonates of actinides are weakly soluble in water; they precipitate as hydrates, such as ThF4·3H2O and Th(CrO4)2·3H2O. Actinides with oxidation state +6, except for the AnO22+-type cations, form [AnO4]2−, [An2O7]2− and other complex anions. For example, uranium, neptunium and plutonium form salts of the Na2UO4 (uranate) and (NH4)2U2O7 (diuranate) types. In comparison with lanthanides, actinides more easily form coordination compounds, and this ability increases with the actinide valence. 
Trivalent actinides do not form fluoride coordination compounds, whereas tetravalent thorium forms K2ThF6, KThF5, and even K5ThF9 complexes. Thorium also forms the corresponding sulfates (for example Na2SO4·Th(SO4)2·5H2O), nitrates and thiocyanates. Salts with the general formula An2Th(NO3)6·nH2O are of coordination nature, with the coordination number of thorium equal to 12. Complex salts of pentavalent and hexavalent actinides are even easier to produce. The most stable coordination compounds of actinides – tetravalent thorium and uranium – are obtained in reactions with diketones, e.g. acetylacetone. Applications While actinides have some established daily-life applications, such as in smoke detectors (americium) and gas mantles (thorium), they are mostly used in nuclear weapons and as fuel in nuclear reactors. The last two areas exploit the property of actinides to release enormous energy in nuclear reactions, which under certain conditions may become self-sustaining chain reactions. The most important isotope for nuclear power applications is uranium-235. It is used in thermal reactors, and its concentration in natural uranium does not exceed 0.72%. This isotope strongly absorbs thermal neutrons, releasing much energy. The fission of 1 gram of 235U releases about 1 MW·day of energy (a back-of-the-envelope check appears in the sketch below). Of importance is that 235U emits more neutrons than it absorbs; upon reaching the critical mass, it enters into a self-sustaining chain reaction. Typically, the uranium nucleus divides into two fragments with the release of 2–3 neutrons, schematically: 235U + n ⟶ fission fragments + 3 n. Other promising actinide isotopes for nuclear power are thorium-232 and its product from the thorium fuel cycle, uranium-233. Emission of neutrons during the fission of uranium is important not only for maintaining the nuclear chain reaction, but also for the synthesis of the heavier actinides. Uranium-239 converts via two successive β-decays (through neptunium-239) into plutonium-239, which, like uranium-235, is fissile. The world's first nuclear reactors were built not for energy, but for producing plutonium-239 for nuclear weapons. About half of the thorium produced is used as the light-emitting material of gas mantles. Thorium is also added into multicomponent alloys of magnesium and zinc. Mg-Th alloys are light and strong, but also have a high melting point and ductility, and thus are widely used in the aviation industry and in the production of missiles. Thorium also has good electron emission properties, with a long lifetime and a low potential barrier for emission. The relative content of thorium and uranium isotopes is widely used to estimate the age of various objects, including stars (see radiometric dating). The major application of plutonium has been in nuclear weapons, where the isotope plutonium-239 was a key component due to its ease of fission and availability. Plutonium-based designs allow reducing the critical mass to about a third of that for uranium-235. The "Fat Man"-type plutonium bombs produced during the Manhattan Project used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. (See also Nuclear weapon design.) Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs. 
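The energy figures quoted in this section can be sanity-checked with simple arithmetic: multiplying the number of 235U (or 239Pu) atoms in a given mass by a typical energy release of roughly 200 MeV per fission gives the total energy for complete fission. The short Python sketch below is only an order-of-magnitude illustration; the 200 MeV figure is a commonly quoted approximation, and real reactors and weapons fission only a fraction of their fuel.

    # Order-of-magnitude check of the fission-energy figures quoted above.
    # Assumes complete fission and ~200 MeV released per fission (an approximation).
    AVOGADRO = 6.022e23        # atoms per mole
    MOLAR_MASS = 235.0         # g/mol for 235U (using 239 for 239Pu gives a similar result)
    E_FISSION_MEV = 200.0      # approximate energy per fission event
    MEV_TO_J = 1.602e-13       # joules per MeV

    joules_per_gram = (AVOGADRO / MOLAR_MASS) * E_FISSION_MEV * MEV_TO_J
    mw_day = joules_per_gram / (1e6 * 86_400)          # 1 MW·day = 8.64e10 J
    kt_per_kg = joules_per_gram * 1000 / 4.184e12      # 1 kiloton of TNT = 4.184e12 J

    print(f"{joules_per_gram:.2e} J per gram, about {mw_day:.2f} MW·day")   # ~0.95 MW·day
    print(f"complete fission of 1 kg is roughly {kt_per_kg:.0f} kt of TNT") # ~20 kt

On these numbers, a 20-kiloton yield corresponds to complete fission of only about 1 kg of material, which illustrates why compression and a well-timed neutron source were needed to raise the efficiency of the 6.2 kg Fat Man core.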
Plutonium-238 is a potentially more efficient isotope for nuclear reactors, since it has a smaller critical mass than uranium-235, but it continues to release much thermal energy (0.56 W/g) by decay even when the fission chain reaction is stopped by control rods. Its application is limited by its high price (about US$1000/g). This isotope has been used in thermopiles and water distillation systems of some space satellites and stations. The Galileo and Apollo spacecraft (e.g. Apollo 14) had heaters powered by kilogram quantities of plutonium-238 oxide; this heat is also transformed into electricity with thermopiles. The decay of plutonium-238 produces relatively harmless alpha particles and is not accompanied by gamma rays. Therefore, this isotope (~160 mg) is used as the energy source in heart pacemakers, where it lasts about 5 times longer than conventional batteries. Actinium-227 is used as a neutron source. Its high specific energy (14.5 W/g) and the possibility of obtaining significant quantities of thermally stable compounds are attractive for use in long-lasting thermoelectric generators for remote use. 228Ac is used as an indicator of radioactivity in chemical research, as it emits high-energy electrons (2.18 MeV) that can be easily detected. 228Ac-228Ra mixtures are widely used as an intense gamma-source in industry and medicine. Development of self-glowing actinide-doped materials with durable crystalline matrices is a new area of actinide utilization, as the addition of alpha-emitting radionuclides to some glasses and crystals may confer luminescence. Toxicity Radioactive substances can harm human health via (i) local skin contamination, (ii) internal exposure due to ingestion of radioactive isotopes, and (iii) external overexposure by β-activity and γ-radiation. Together with radium and transuranium elements, actinium is one of the most dangerous radioactive poisons with high specific α-activity. The most important feature of actinium is its ability to accumulate and remain in the surface layer of the skeleton. At the initial stage of poisoning, actinium accumulates in the liver. Another danger of actinium is that it undergoes radioactive decay faster than it is excreted. Absorption from the digestive tract is much lower (~0.05%) for actinium than for radium. Protactinium in the body tends to accumulate in the kidneys and bones. The maximum safe dose of protactinium in the human body is 0.03 μCi, which corresponds to 0.5 micrograms of 231Pa. This isotope, which might be present in the air as an aerosol, is 2.5 times more toxic than hydrocyanic acid. Plutonium, when entering the body through air, food or blood (e.g. a wound), mostly settles in the lungs, liver and bones, with only about 10% going to other organs, and remains there for decades. The long residence time of plutonium in the body is partly explained by its poor solubility in water. Some isotopes of plutonium emit ionizing α-radiation, which damages the surrounding cells. The median lethal dose (LD50) for 30 days in dogs after intravenous injection of plutonium is 0.32 milligram per kg of body mass, and thus the lethal dose for humans is approximately 22 mg for a person weighing 70 kg; the amount for respiratory exposure should be approximately four times greater. Another estimate assumes that plutonium is 50 times less toxic than radium, and thus the permissible content of plutonium in the body should be 5 μg or 0.3 μCi. Such an amount is nearly invisible under a microscope. 
After trials on animals, this maximum permissible dose was reduced to 0.65 μg or 0.04 μCi. Studies on animals also revealed that the most dangerous plutonium exposure route is through inhalation, after which 5–25% of the inhaled substance is retained in the body. Depending on the particle size and solubility of the plutonium compounds, plutonium is localized either in the lungs or in the lymphatic system, or is absorbed in the blood and then transported to the liver and bones. Contamination via food is the least likely route. In this case, only about 0.05% of soluble and 0.01% of insoluble plutonium compounds is absorbed into the blood, and the rest is excreted. Damaged skin exposed to plutonium, however, can retain nearly 100% of it. Using actinides in nuclear fuel, sealed radioactive sources or advanced materials such as self-glowing crystals has many potential benefits. However, a serious concern is the extremely high radiotoxicity of actinides and their migration in the environment. Use of chemically unstable forms of actinides in MOX and sealed radioactive sources is not appropriate by modern safety standards. The challenge is to develop stable and durable actinide-bearing materials that provide safe storage, use and final disposal. A key need is the application of actinide solid solutions in durable crystalline host phases. See also Actinides in the environment Lanthanides Major actinides Minor actinides Transuranics Notes References Bibliography External links Lawrence Berkeley Laboratory image of historic periodic table by Seaborg showing actinide series for the first time Lawrence Livermore National Laboratory, Uncovering the Secrets of the Actinides Los Alamos National Laboratory, Actinide Research Quarterly Periodic table
Actinide
[ "Chemistry" ]
12,900
[ "Periodic table" ]
2,322
https://en.wikipedia.org/wiki/Audio%20signal%20processing
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation. History The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music. Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for perceptual coding and is widely used in speech coding, while MDCT coding is widely used in modern audio coding formats such as MP3 and Advanced Audio Coding (AAC). Types Analog An analog audio signal is a continuous signal represented by an electrical voltage or current that is analogous to the sound waves in the air. Analog signal processing then involves physically altering the continuous signal by changing the voltage or current or charge via electrical circuits. Historically, before the advent of widespread digital technology, analog was the only method by which to manipulate a signal. Since that time, as computers and software have become more capable and affordable, digital signal processing has become the method of choice. However, in music applications, analog technology is often still desirable as it often produces nonlinear responses that are difficult to replicate with digital filters. Digital A digital representation expresses the audio waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as digital signal processors, microprocessors and general-purpose computers. Most modern audio systems use a digital approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing. 
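As a concrete illustration of the digital representation described above, the following Python sketch samples a 440 Hz sine wave and quantizes it to 16-bit integers, which is the essence of pulse-code modulation. The sample rate and bit depth are illustrative CD-style choices, not requirements of the method.

    # Minimal PCM illustration: sample a continuous waveform, then quantize it.
    import math

    SAMPLE_RATE = 44_100     # samples per second (illustrative, CD quality)
    FREQUENCY = 440.0        # Hz, concert A
    DURATION = 0.01          # seconds of audio to generate
    BIT_DEPTH = 16           # bits per sample

    max_amplitude = 2 ** (BIT_DEPTH - 1) - 1    # 32767 for signed 16-bit samples

    samples = []
    for n in range(int(SAMPLE_RATE * DURATION)):
        t = n / SAMPLE_RATE                               # sampling: discrete time points
        x = math.sin(2 * math.pi * FREQUENCY * t)         # continuous-valued signal
        samples.append(int(round(x * max_amplitude)))     # quantization to integers

    print(samples[:8])   # the first few PCM samples of the digitized sine wave

A digital signal processor or general-purpose computer then operates mathematically on such integer sequences, whereas an analog processor acts on the underlying voltage directly.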
Applications Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g. equalization, filtering, level compression, echo and reverb removal or addition, etc.). Audio broadcasting Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize overmodulation, compensate for non-linear transmitters (a potential issue with medium wave and shortwave broadcasting), and adjust overall loudness to the desired level. Active noise control Active noise control is a technique designed to reduce unwanted sound. A signal identical to the unwanted noise but with the opposite polarity is generated; when the two signals are combined, they cancel out due to destructive interference (a minimal numerical sketch of this cancellation appears at the end of this section). Audio synthesis Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis. Audio effects Audio effects alter the sound of a musical instrument or other audio source. Common effects include distortion, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces. Musicians, audio engineers and record producers use effects units during live performances or in the studio, typically with electric guitar, bass guitar, electronic keyboard or electric piano. While effects are most frequently used with electric or electronic instruments, they can be used with any audio source, such as acoustic instruments, drums, and vocals. Computer audition Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, describes these systems as "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents." Inspired by models of human audition, CA deals with questions of representation, transduction, grouping, use of musical knowledge and general sound semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically this requires a combination of methods from the fields of signal processing, auditory modelling, music perception and cognition, pattern recognition, and machine learning, as well as more traditional methods of artificial intelligence for musical knowledge representation. 
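Returning to the active noise control principle described above, the sketch below adds a noise signal to an exact polarity-inverted copy of itself and shows that the residual is zero. This is a deliberately idealized toy example; real systems must estimate the noise and apply the anti-noise with essentially no delay or amplitude error, so cancellation is never this complete in practice.

    # Toy demonstration of destructive interference as used in active noise control.
    import math

    SAMPLE_RATE = 8_000   # Hz, illustrative
    noise = [math.sin(2 * math.pi * 100.0 * n / SAMPLE_RATE) for n in range(64)]

    anti_noise = [-x for x in noise]                       # same signal, opposite polarity
    residual = [a + b for a, b in zip(noise, anti_noise)]  # superposition of the two signals

    print(max(abs(x) for x in residual))   # 0.0 in this ideal case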
See also Sound card Sound effect References Further reading Audio electronics Signal processing
Audio signal processing
[ "Technology", "Engineering" ]
1,282
[ "Audio electronics", "Telecommunications engineering", "Computer engineering", "Signal processing", "Audio engineering" ]
2,362
https://en.wikipedia.org/wiki/Antibody
An antibody (Ab) or immunoglobulin (Ig) is a large, Y-shaped protein belonging to the immunoglobulin superfamily which is used by the immune system to identify and neutralize antigens such as bacteria and viruses, including those that cause disease. Antibodies can recognize virtually any size antigen, able to perceive diverse chemical compositions. Each antibody recognizes one or more specific antigens. Antigen literally means "antibody generator", as it is the presence of an antigen that drives the formation of an antigen-specific antibody. Each tip of the "Y" of an antibody contains a paratope that specifically binds to one particular epitope on an antigen, allowing the two molecules to bind together with precision. Using this mechanism, antibodies can effectively "tag" a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion). More narrowly, an antibody (Ab) can refer to the free (secreted) form of these proteins, as opposed to the membrane-bound form found in a B cell receptor. The term immunoglobulin can then refer to both forms. Since they are, broadly speaking, the same protein, the terms are often treated as synonymous. To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. The rest of the antibody structure is much less variable; in humans, antibodies occur in five classes, sometimes called isotypes: IgA, IgD, IgE, IgG, and IgM. Human IgG and IgA antibodies are also divided into discrete subclasses (IgG1, IgG2, IgG3, IgG4; IgA1 and IgA2). The class refers to the functions triggered by the antibody (also known as effector functions), in addition to some other structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Between species, while classes and subclasses of antibodies may be shared (at least in name), their functions and distribution throughout the body may be different. For example, mouse IgG1 is closer to human IgG2 than human IgG1 in terms of its function. The term humoral immunity is often treated as synonymous with the antibody response, describing the function of the immune system that exists in the body's humors (fluids) in the form of soluble proteins, as distinct from cell-mediated immunity, which generally describes the responses of T cells (especially cytotoxic T cells). In general, antibodies are considered part of the adaptive immune system, though this classification can become complicated. For example, natural IgM, which are made by B-1 lineage cells that have properties more similar to innate immune cells than adaptive, refers to IgM antibodies made independently of an immune response that demonstrate polyreactivity- they recognize multiple distinct (unrelated) antigens. These can work with the complement system in the earliest phases of an immune response to help facilitate clearance of the offending antigen and delivery of the resulting immune complexes to the lymph nodes or spleen for initiation of an immune response. Hence in this capacity, the function of antibodies is more akin to that of innate immunity than adaptive. 
Nonetheless, in general antibodies are regarded as part of the adaptive immune system because they demonstrate exceptional specificity (with some exceptions), are produced through genetic rearrangements (rather than being encoded directly in the germline), and are a manifestation of immunological memory. In the course of an immune response, B cells can progressively differentiate into antibody-secreting cells or into memory B cells. Antibody-secreting cells comprise plasmablasts and plasma cells, which differ mainly in the degree to which they secrete antibody, their lifespan, metabolic adaptations, and surface markers. Plasmablasts are rapidly proliferating, short-lived cells produced in the early phases of the immune response (classically described as arising extrafollicularly rather than from the germinal center) which have the potential to differentiate further into plasma cells. Occasionally plasmablasts are described as short-lived plasma cells, but formally this is incorrect. Plasma cells, in contrast, do not divide (they are terminally differentiated), and rely on survival niches comprising specific cell types and cytokines to persist. Plasma cells will secrete huge quantities of antibody regardless of whether or not their cognate antigen is present, ensuring that antibody levels to the antigen in question do not fall to zero, provided the plasma cell stays alive. The rate of antibody secretion, however, can be regulated, for example, by the presence of adjuvant molecules that stimulate the immune response, such as TLR ligands. Long-lived plasma cells can live for potentially the entire lifetime of the organism. Classically, the survival niches that house long-lived plasma cells reside in the bone marrow, though it cannot be assumed that any given plasma cell in the bone marrow will be long-lived. However, other work indicates that survival niches can readily be established within the mucosal tissues, though the classes of antibodies involved show a different hierarchy from those in the bone marrow. B cells can also differentiate into memory B cells, which, like long-lived plasma cells, can persist for decades. These cells can be rapidly recalled in a secondary immune response, undergoing class switching and affinity maturation and differentiating into antibody-secreting cells. Antibodies are central to the immune protection elicited by most vaccines and infections (although other components of the immune system certainly participate and for some diseases are considerably more important than antibodies in generating an immune response, e.g. herpes zoster). Durable protection from infections caused by a given microbe – that is, the ability of the microbe to enter the body and begin to replicate (not necessarily to cause disease) – depends on sustained production of large quantities of antibodies, meaning that effective vaccines ideally elicit persistent high levels of antibody, which relies on long-lived plasma cells. At the same time, many microbes of medical importance have the ability to mutate to escape antibodies elicited by prior infections, and long-lived plasma cells cannot undergo affinity maturation or class switching. This is compensated for through memory B cells: novel variants of a microbe that still retain structural features of previously encountered antigens can elicit memory B cell responses that adapt to those changes. 
It has been suggested that long-lived plasma cells secrete B cell receptors with higher affinity than those on the surfaces of memory B cells, but findings are not entirely consistent on this point. Structure Antibodies are heavy (~150 kDa) proteins of about 10 nm in size, arranged in three globular regions that roughly form a Y shape. In humans and most other mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds. Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each. These domains are usually represented in simplified schematics as rectangles. Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ... Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape. In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily. In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction. Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ. This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies. Antigen-binding site The variable domains can also be referred to as the FV region. It is the subregion of Fab that binds to an antigen. More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody. When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody. These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen. Three CDRs from each of the heavy and light chains together form an antibody-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen. Typically though, only a few residues contribute to most of the binding energy. The existence of two identical antibody-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes. The structures of CDRs have been clustered and classified by Chothia et al. and more recently by North et al. and Nikoloudis et al. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. In the framework of the immune network theory, CDRs are also called idiotypes. 
According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes. Fc region The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is to modulate immune cell activity: it is where effector molecules bind, triggering various effects after the antibody Fab region binds to an antigen. Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot; therefore IgA does not activate the classical complement pathway. Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport them across the placenta, from the mother to the fetus. In addition, binding to FcRn endows IgG with an exceptionally long half-life of 3–4 weeks relative to other plasma proteins. IgG3 in most cases (depending on allotype) has mutations at the FcRn binding site that lower its affinity for FcRn; these are thought to have evolved to limit the highly inflammatory effects of this subclass. Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues. These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules. Protein structure The N-terminus of each chain is situated at the tip. Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily: it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif. The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond. Antibody complexes Secreted antibodies can occur as a single Y-shaped unit, a monomer. However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). IgG can also form hexamers, though no J chain is required. IgA tetramers and pentamers have also been reported. Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex. Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc. Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies. An extreme example is the clumping, or agglutination, of red blood cells with antibodies in blood typing to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation. B cell receptors The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. 
Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors. These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences. Classes Antibodies can come in different varieties known as isotypes or classes. In humans there are five antibody classes known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1 and IgA2. The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively. The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region. The classes differ in their biological properties, functional locations and ability to deal with different antigens. For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often a sole contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, but not technically, asthma). The antibody's variable region binds to the allergenic antigen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to Fc receptor ε on a mast cell, triggering its degranulation: the release of molecules stored in its granules. The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell-surface-bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system. Light chain types In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei). In non-mammalian animals In most placental mammals, the structure of antibodies is generally the same. Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier. 
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies. Antibody–antigen interactions The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants. Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities. Function The main categories of antibody action include the following: Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following: Lysis of the foreign cell Encouragement of inflammation by chemotactically attracting inflammatory cells More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity. Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures. At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appear within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: They prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens). Activation of complement Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. 
This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis). Activation of effector cells To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region. Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, natural killer cells will release cytokines and cytotoxic molecules; that will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens. Natural antibodies Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Antibodies are produced exclusively by B cells in response to antigens where initially, antibodies are formed as membrane-bound receptors, but upon activation by antigens and helper T cells, B cells differentiate to produce soluble antibodies. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. These antibodies undergo quality checks in the endoplasmic reticulum (ER), which contains proteins that assist in proper folding and assembly. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue. Immunoglobulin diversity Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. 
Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes. Domain variability The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combination is called V(D)J recombination, discussed below. V(D)J recombination Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (e.g. the V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences the biology of B cells. RAG proteins play an important role in V(D)J recombination, cutting DNA at particular regions. Without the presence of these proteins, V(D)J recombination would not occur. After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus each B cell can produce antibodies containing only one kind of variable chain. Somatic hypermutation and affinity maturation Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). 
SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains. This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells. Class switching Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment. Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype. Specificity designations An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). 
In contrast, monoclonal antibodies are identical antibodies produced by a single B cell. Asymmetrical antibodies Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody-dependent cell-mediated cytotoxicity. In single-chain variable fragments (scFv), the variable domains of the heavy and light chains are connected via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation. To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms. Heterodimeric antibodies have a greater range of shapes they can take, and the drugs attached to the arms do not have to be the same on each arm, allowing for different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form should lead to decreased functionality. Interchromosomal DNA Transposition Antibody diversification typically occurs through somatic hypermutation, class switching, and affinity maturation targeting the BCR gene loci, but on occasion more unconventional forms of diversification have been documented. For example, in the case of malaria caused by Plasmodium falciparum, some antibodies from those who had been infected demonstrated an insertion from chromosome 19 containing a 98-amino acid stretch from leukocyte-associated immunoglobulin-like receptor 1, LAIR1, in the elbow joint. This represents a form of interchromosomal transposition. LAIR1 normally binds collagen, but can recognize members of the repetitive interspersed family of polypeptides (RIFIN) that are highly expressed on the surface of P. falciparum-infected red blood cells. In fact, these antibodies underwent affinity maturation that enhanced affinity for RIFIN but abolished affinity for collagen. These "LAIR1-containing" antibodies have been found in 5–10% of donors from Tanzania and Mali, though not in European donors. 
However, European donors did show 100-1000 nucleotide stretches inside the elbow joints as well. This particular phenomenon may be specific to malaria, as infection is known to induce genomic instability. History The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something. The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. Their idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization. In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies. Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. 
While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies. Medical applications Disease diagnosis Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed. In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis. Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women. Practically, several immunodiagnostic methods based on detection of complex antigen-antibody are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over the counter pregnancy tests. New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer. Disease therapy Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer. Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual. Prenatal therapy Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. 
In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn. Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rh-negative mother has a Rh-positive fetus. Treatment of a mother with Anti-RhD antibodies prior to and immediately after trauma and delivery destroys Rh antigen in the mother's system from the fetus. This occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) Immune Globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself. Research applications Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse for large quantities of antibody. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography. In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques. Antibodies used in research are some of the most powerful, yet most problematic reagents with a tremendous number of factors that must be controlled in any experiment including cross reactivity, or the antibody recognizing multiple epitopes and affinity, which can vary widely depending on experimental conditions such as pH, solvent, state of tissue etc. 
Multiple attempts have been made to improve both the way that researchers validate antibodies and ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested, and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper, is co-published in 4 journals that implemented the RRIDs Standard for research resource citation, which draws data from the antibodyregistry.org as the source of antibody identifiers (see also group at Force11). Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several application involve using bacterial plasmids to tag plasmids with the Fc region of the antibody such as pFUSE-Fc plasmid. Regulations Production and testing There are several ways to obtain antibodies, including in vivo techniques like animal immunization and various in vitro approaches, such as the phage display method. Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include: The demonstration that the process is able to produce in good quality (the process should be validated) The efficiency of the antibody purification (all impurities and virus must be eliminated) The characterization of purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...) Determination of the virus clearance studies Before clinical trials Product safety testing: Sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing..., product safety data needed before the initiation of feasibility trials in serious or immediately life-threatening conditions, it serves to evaluate dangerous potential of the product. Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing). Preclinical studies Testing cross-reactivity of antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with a quick-frozen adult tissues) or in vivo (with appropriates animal models). Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible. 
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate the potential clinical effects Structure prediction and computational antibody design The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody FV region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on structural bioinformatics studies of antibody CDRs. There are a variety of methods used to sequence an antibody, including Edman degradation, cDNA sequencing, and others; however, one of the most common modern methods for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for the data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase the coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in the attempt to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing peptides, intensity, and positional confidence scores from database and homology searches. Antibody mimetic Antibody mimetics are organic compounds, like antibodies, that can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. 
Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents. Binding antibody unit BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity. See also Affimer Anti-mitochondrial antibodies Anti-nuclear antibodies Antibody mimetic Aptamer Colostrum ELISA Humoral immunity Immunology Immunosuppressive drug Intravenous immunoglobulin (IVIg) Magnetic immunoassay Microantibody Monoclonal antibody Neutralizing antibody Optimer Ligand Secondary antibodies Single-domain antibody Slope spectroscopy Surrobody Synthetic antibody Western blot normalization References External links Mike's Immunoglobulin Structure/Function Page at University of Cambridge Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford How Lymphocytes Produce Antibody from Cells Alive! Glycoproteins Immunology Reagents for biochemistry
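To make the peptide-assembly idea in the sequencing discussion above concrete, here is a minimal, hypothetical sketch: a greedy suffix-prefix overlap merge of short peptide reads. This is only a toy illustration, not the algorithm used by any of the tools referred to above; the peptide strings, function names, and overlap threshold are invented for the example.

```python
# Toy sketch (not the published algorithms): greedily merge de novo peptide
# reads by their longest suffix-prefix overlaps, illustrating how overlapping
# peptides from LC-MS/MS can be stitched into a longer antibody sequence.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` that equals a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def greedy_assemble(peptides: list[str]) -> str:
    """Repeatedly merge the pair of peptides with the largest overlap."""
    seqs = list(dict.fromkeys(peptides))              # drop exact duplicates
    while len(seqs) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(seqs):
            for j, b in enumerate(seqs):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_i is None:                            # no overlaps left: just concatenate
            return "".join(seqs)
        merged = seqs[best_i] + seqs[best_j][best_k:]
        seqs = [s for idx, s in enumerate(seqs) if idx not in (best_i, best_j)] + [merged]
    return seqs[0]

# Hypothetical overlapping peptide reads spanning a CDR-H3-like region:
reads = ["CARDGGYYF", "GGYYFDYWGQ", "FDYWGQGTLV"]
print(greedy_assemble(reads))   # -> CARDGGYYFDYWGQGTLV
```

Real antibody sequence assembly additionally weighs fragment-ion intensities, positional confidence scores, and database/homology evidence, as noted above; the sketch shows only the overlap step.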
Antibody
[ "Chemistry", "Biology" ]
11,074
[ "Reagents for biochemistry", "Biochemistry methods", "Biochemistry", "Immunology", "Glycoproteins", "Glycobiology" ]
2,389
https://en.wikipedia.org/wiki/Auger%20effect
The Auger effect (; ) or Auger−Meitner effect is a physical phenomenon in which atoms eject electrons. It occurs when an inner-shell vacancy in an atom is filled by an electron, releasing energy that causes the emission of another electron from a different shell of the same atom. When a core electron is removed, leaving a vacancy, an electron from a higher energy level may fall into the vacancy, resulting in a release of energy. For light atoms (Z<12), this energy is most often transferred to a valence electron which is subsequently ejected from the atom. This second ejected electron is called an Auger electron. For heavier atomic nuclei, the release of the energy in the form of an emitted photon becomes gradually more probable. Effect Upon ejection, the kinetic energy of the Auger electron corresponds to the difference between the energy of the initial electronic transition into the vacancy and the ionization energy for the electron shell from which the Auger electron was ejected. These energy levels depend on the type of atom and the chemical environment in which the atom was located. Auger electron spectroscopy involves the emission of Auger electrons by bombarding a sample with either X-rays or energetic electrons and measures the intensity of Auger electrons that result as a function of the Auger electron energy. The resulting spectra can be used to determine the identity of the emitting atoms and some information about their environment. Auger recombination is a similar Auger effect which occurs in semiconductors. An electron and electron hole (electron-hole pair) can recombine giving up their energy to an electron in the conduction band, increasing its energy. The reverse effect is known as impact ionization. The Auger effect can impact biological molecules such as DNA. Following the K-shell ionization of the component atoms of DNA, Auger electrons are ejected leading to damage of its sugar-phosphate backbone. Discovery The Auger emission process was observed and published in 1922 by Lise Meitner, an Austrian-Swedish physicist, as a side effect in her competitive search for the nuclear beta electrons with the British physicist Charles Drummond Ellis. The French physicist Pierre Victor Auger independently discovered it in 1923 upon analysis of a Wilson cloud chamber experiment and it became the central part of his PhD work. High-energy X-rays were applied to ionize gas particles and observe photoelectric electrons. The observation of electron tracks that were independent of the frequency of the incident photon suggested a mechanism for electron ionization that was caused from an internal conversion of energy from a radiationless transition. Further investigation, and theoretical work using elementary quantum mechanics and transition rate/transition probability calculations, showed that the effect was a radiationless effect more than an internal conversion effect. See also Auger therapy Charge carrier generation and recombination Characteristic X-ray Coster–Kronig transition Electron capture Radiative Auger effect References Atomic physics Foundational quantum physics Electron spectroscopy
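As a rough worked illustration of the kinetic-energy relation described in the Effect section above, consider a KL1L2,3 transition in copper, using approximate tabulated binding energies and neglecting final-state relaxation and any work-function correction: E_kin ≈ E_K − E_L1 − E_L2,3 ≈ 8979 eV − 1097 eV − 933 eV ≈ 6.9 keV. The numbers are illustrative and the neglected corrections shift the result by up to a few tens of electronvolts, so this should be read as an estimate of the scale of Auger electron energies rather than a precise prediction.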
Auger effect
[ "Physics", "Chemistry" ]
596
[ "Spectrum (physical sciences)", "Electron spectroscopy", "Foundational quantum physics", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Spectroscopy", " and optical physics" ]
2,408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern analytical techniques. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. 
An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient are critical. Classical methods Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. Qualitative analysis Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. Chemical tests There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Flame test Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). Gravimetric analysis The gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water. Volumetric analysis Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the amount of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves in order by systematically testing the pH every drop in order to understand different properties of the titrant. 
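As a simple worked illustration of the volumetric calculation described above, the sketch below assumes a 1:1 acid-base stoichiometry and made-up volumes and concentrations; it is not taken from any particular procedure.

```python
# Minimal titration calculation: at the equivalence point, moles of titrant
# added equal moles of analyte (for a 1:1 reaction), so
# c_analyte = (c_titrant * V_titrant) / V_analyte.

def analyte_concentration(c_titrant_M: float, v_titrant_mL: float,
                          v_analyte_mL: float, ratio: float = 1.0) -> float:
    """Analyte concentration (mol/L) for a given titrant:analyte mole ratio."""
    moles_titrant = c_titrant_M * v_titrant_mL / 1000.0   # mol of titrant delivered
    moles_analyte = moles_titrant / ratio                  # mol of analyte consumed
    return moles_analyte / (v_analyte_mL / 1000.0)         # mol/L in the original sample

# Example: 24.30 mL of 0.1000 M NaOH neutralises 25.00 mL of an HCl solution.
print(analyte_concentration(0.1000, 24.30, 25.00))  # ~0.0972 M HCl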
Instrumental methods Spectroscopy Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. Mass spectrometry Mass spectrometry measures mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, where they are separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Electrochemical analysis Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). Potentiometry measures the cell's potential, coulometry measures the cell's current, and voltammetry measures the change in current when cell potential changes. Thermal analysis Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. Separation Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. Chromatographic assays Chromatography can be used to determine the presence of substances in a sample as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speed. Different components of a mixture can therefore be identified by their respective Rƒ values, which is the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography. In combination with the instrumental methods, chromatography can be used in quantitative determination of the substances. Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. There are different types of chromatography that differ from the media they use to separate the analyte and the sample. In Thin-layer chromatography, the analyte mixture moves up and separates along the coated sheet under the volatile mobile phase. In Gas chromatography, gas separates the volatile analytes. A common method for chromatography using liquid as a mobile phase is High-performance liquid chromatography. Hybrid techniques Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. 
Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of hyphen, especially if the name of one of the methods contains a hyphen itself. Microscopy The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field has been progressing rapidly because of the rapid development of the computer and camera industries. Lab-on-a-chip Devices that integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and that are capable of handling extremely small fluid volumes down to less than picoliters. Errors Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment while random error results from uncontrolled or uncontrollable variables in the experiment. The true value and observed value in chemical analysis can be related to each other by the equation e_a = x_o − x_t, where e_a is the absolute error, x_t is the true value, and x_o is the observed value. The error of a measurement is an inverse measure of the accuracy of the measurement, i.e. the smaller the error, the greater the accuracy of the measurement. Errors can also be expressed relatively. The relative error is e_r = e_a / x_t, and the percent error can then be calculated as % error = e_r × 100. If we want to use these values in a function, we may also want to calculate the error of the function. Let f be a function of the variables x_1, ..., x_n. The propagation of uncertainty must then be calculated in order to know the error in f; for independent variables it is estimated as Δf = √( Σ_i (∂f/∂x_i)² (Δx_i)² ). Standards Standard curve A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. Internal standards Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. 
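A minimal sketch of the internal-standard quantitation just described is given below. The peak areas and concentrations are hypothetical, and a single-point relative response factor is assumed rather than a full multi-level calibration.

```python
# Internal-standard quantitation sketch: a relative response factor (RRF) is
# measured from a standard mixture containing both the analyte and the internal
# standard (IS), then used to convert the analyte/IS signal ratio measured in
# the unknown into a concentration. All numbers are hypothetical.

def rrf(area_analyte_std, area_is_std, conc_analyte_std, conc_is_std):
    """Relative response factor from a standard containing both compounds."""
    return (area_analyte_std / conc_analyte_std) / (area_is_std / conc_is_std)

def quantify(area_analyte, area_is, conc_is, rrf_value):
    """Analyte concentration in a sample spiked with internal standard."""
    return (area_analyte / area_is) * conc_is / rrf_value

f = rrf(area_analyte_std=5200, area_is_std=4800, conc_analyte_std=10.0, conc_is_std=10.0)
print(quantify(area_analyte=3100, area_is=4650, conc_is=10.0, rrf_value=f))
# ~6.2, in the same concentration units as conc_is
```

Because the analyte signal is read against the co-injected internal standard, losses during sample preparation or injection-to-injection variability cancel to first order, which is the practical motivation for the approach.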
An ideal internal standard is an isotopically enriched analyte which gives rise to the method of isotope dilution. Standard addition The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. Signals and noise One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. Thermal noise Thermal noise results from the thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that the power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise voltage in a resistor is given by v_RMS = √(4 k_B T R Δf), where k_B is the Boltzmann constant, T is the temperature, R is the resistance, and Δf is the bandwidth over which the noise is measured. Shot noise Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by i_RMS = √(2 e I Δf), where e is the elementary charge, I is the average current, and Δf is the measurement bandwidth. Shot noise is white noise. Flicker noise Flicker noise is electronic noise with a 1/f frequency spectrum; as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel, generation and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulation of the signal at a higher frequency, for example, through the use of a lock-in amplifier. Environmental noise Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments. Noise reduction Noise reduction can be accomplished either in computer hardware or software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble average, boxcar average, and correlation methods. Applications Analytical chemistry has applications including in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed), and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. 
In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques where uses of optical cavities for increased effective absorption pathlength are expected to expand. The use of plasma- and laser-based methods is increasing. An interest towards absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking the analysis techniques to chip size. Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. (micro total analysis system (μTAS) or lab-on-a-chip). Microscale chemistry reduces the amounts of chemicals used. Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarray; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body, metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics - lipids and its associated fields; peptidomics - peptides and its associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules. Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations. See also Calorimeter Clinical chemistry Ion beam analysis List of chemical analysis methods Important publications in analytical chemistry List of materials analysis methods Measurement uncertainty Metrology Microanalysis Nuclear reaction analysis Quality of analytical results Radioanalytical chemistry Rutherford backscattering spectroscopy Sensory analysis - in the field of Food science Virtual instrumentation Working range References Further reading Gurdeep, Chatwal Anand (2008). Instrumental Methods of Chemical Analysis Himalaya Publishing House (India) Ralph L. Shriner, Reynold C. Fuson, David Y. Curtin, Terence C. Morill: The systematic identification of organic compounds - a laboratory manual, Verlag Wiley, New York 1980, 6. 
edition, . Bettencourt da Silva, R; Bulska, E; Godlewska-Zylkiewicz, B; Hedrich, M; Majcen, N; Magnusson, B; Marincic, S; Papadakis, I; Patriarca, M; Vassileva, E; Taylor, P; Analytical measurement: measurement uncertainty and statistics, 2012, . External links Infografik and animation showing the progress of analytical chemistry aas Atomic Absorption Spectrophotometer Materials science
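For the standard-curve approach described in the Standards section above, a minimal sketch follows. The concentrations and signals are made-up numbers, a purely linear response is assumed, and no weighting or outlier handling is shown.

```python
# Calibration-curve (standard-curve) sketch: fit instrument response against
# concentration for a series of known standards, then invert the fit to
# estimate the concentration of an unknown sample.
import numpy as np

conc   = np.array([0.0, 1.0, 2.0, 5.0, 10.0])      # standard concentrations (mg/L)
signal = np.array([0.02, 0.21, 0.39, 0.98, 1.95])  # measured instrument response

slope, intercept = np.polyfit(conc, signal, 1)      # linear least-squares fit

unknown_signal = 0.75
unknown_conc = (unknown_signal - intercept) / slope
print(f"~{unknown_conc:.2f} mg/L")                  # roughly 3.8 mg/L for these numbers
```

If the unknown's signal fell outside the fitted range, the sample would be diluted or the method of addition would be used instead, as discussed above.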
Analytical chemistry
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,040
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
2,443
https://en.wikipedia.org/wiki/Acceleration
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes: the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force; that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass. The SI unit for acceleration is the metre per second squared (m/s², or m s⁻²). For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity is neutralised in reference to the acceleration due to change in speed. Definition and properties Average acceleration An object's average acceleration over a period of time is its change in velocity, Δv, divided by the duration of the period, Δt. Mathematically, a_avg = Δv / Δt. Instantaneous acceleration Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: a = dv/dt. As acceleration is defined as the derivative of velocity, v, with respect to time and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t: a = dv/dt = d²x/dt². (Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.) By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function a(t) is the velocity function v(t); that is, the area under the curve of an acceleration vs. time (a vs. t) graph corresponds to the change of velocity: Δv = ∫ a dt. 
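A small numeric illustration of the definitions above, using made-up velocity samples: acceleration is estimated as the time derivative of sampled velocity by finite differences.

```python
# Estimate acceleration from sampled velocity data (illustrative numbers only).
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # time, s
v = np.array([0.0, 2.0, 8.0, 18.0, 32.0])   # speed, m/s (here v = 2*t**2)

a_avg = (v[-1] - v[0]) / (t[-1] - t[0])      # average acceleration over the whole interval
a_inst = np.gradient(v, t)                   # finite-difference estimate of dv/dt at each sample
print(a_avg)    # 8.0 m/s^2
print(a_inst)   # [ 2.  4.  8. 12. 14.]  (approaches the true a = 4*t away from the endpoints)
```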
Likewise, the integral of the jerk function j(t), the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: Δa = ∫ j dt. Units Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T⁻². The SI unit of acceleration is the metre per second squared (m s⁻²); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second. Other forms An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration. Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law): F = m a, where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large. Tangential and centripetal acceleration The velocity of a particle moving on a curved path as a function of time can be written as v(t) = v(t) u_t(t), with v(t) equal to the speed of travel along the path, and u_t a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of u_t, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as a = dv/dt = (dv/dt) u_t + (v²/r) u_n, where u_n is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. The components a_t = dv/dt and a_n = v²/r are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively. Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas. Special cases Uniform acceleration Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g (also called acceleration due to gravity). By Newton's Second Law the force acting on a body is given by F = m g. Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed: s(t) = s₀ + v₀ t + (1/2) a t² and v(t) = v₀ + a t, where t is the elapsed time, s₀ is the initial displacement from the origin, s(t) is the displacement from the origin at time t, v₀ is the initial velocity, v(t) is the velocity at time t, and a is the uniform rate of acceleration. In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. 
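A brief worked illustration of these constant-acceleration formulas, with illustrative numbers (g ≈ 9.8 m/s², air resistance neglected): a projectile launched horizontally, resolved into a constant-velocity horizontal part and a uniformly accelerated vertical part.

```python
# Projectile under constant gravitational acceleration (illustrative values).
g = 9.8                  # m/s^2, acceleration due to gravity (downward)
v0x, v0y = 10.0, 0.0     # initial velocity components, m/s
x0, y0 = 0.0, 20.0       # initial position, m

def position(t):
    x = x0 + v0x * t                      # constant velocity: no horizontal acceleration
    y = y0 + v0y * t - 0.5 * g * t**2     # uniform downward acceleration
    return x, y

for t in (0.0, 1.0, 2.0):
    print(t, position(t))   # (0.0, 20.0), (10.0, 15.1), (20.0, 0.4): a parabola in the x-y plane
```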
As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth. Circular motion In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, respectively orthogonal to the radius in this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent in the neighbouring point, thereby rotating the velocity vector along the circle. For a given speed v, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r of the circle, and increases as the square of this speed: a_c = v²/r. For a given angular velocity ω, the centripetal acceleration is directly proportional to the radius r: a_c = ω² r. This is due to the dependence of velocity v on the radius r: v = ω r. Expressing the centripetal acceleration vector in polar components, where r is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields a_c = −(v²/|r|) (r/|r|). As usual in rotations, the speed v of a particle may be expressed as an angular speed ω with respect to a point at the distance |r| as v = ω |r|. Thus a_c = −ω² r. This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a so-called pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion. In a nonuniform circular motion, i.e., when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which points to the center of the osculating circle, that determines the radius r for the centripetal acceleration. The tangential component is given by the angular acceleration α, i.e., the rate of change of the angular speed ω times the radius r. That is, a_t = r α. The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration (α), and the tangent is always directed at right angles to the radius vector. Coordinate systems In multi-dimensional Cartesian coordinate systems, acceleration is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, corresponding acceleration components are defined as a_x = dv_x/dt and a_y = dv_y/dt. The two-dimensional acceleration vector is then defined as a = (a_x, a_y). 
The magnitude of this vector is found by the distance formula as |a| = √(a_x² + a_y²). In three-dimensional systems where there is an additional z-axis, the corresponding acceleration component is defined as a_z = dv_z/dt. The three-dimensional acceleration vector is defined as a = (a_x, a_y, a_z), with its magnitude being determined by |a| = √(a_x² + a_y² + a_z²). Relation to relativity Special relativity The special theory of relativity describes the behaviour of objects travelling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is thereby revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations. As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it. General relativity Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating. Conversions See also Acceleration (differential geometry) Four-vector: making the connection between space and time explicit Gravitational acceleration Inertia Orders of magnitude (acceleration) Shock (mechanics) Shock and vibration data logger measuring 3-axis acceleration Space travel using constant acceleration Specific force References External links Acceleration Calculator Simple acceleration unit converter Dynamics (mechanics) Kinematic properties Vector physical quantities
Acceleration
[ "Physics", "Mathematics" ]
2,292
[ "Physical phenomena", "Mechanical quantities", "Physical quantities", "Acceleration", "Quantity", "Classical mechanics", "Motion (physics)", "Kinematic properties", "Dynamics (mechanics)", "Vector physical quantities", "Wikipedia categories named after physical quantities" ]
2,457
https://en.wikipedia.org/wiki/Apoptosis
Apoptosis (from ) is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses 50 to 70 billion cells each day due to apoptosis. For the average human child between 8 and 14 years old, each day the approximate loss is 20 to 30 billion cells. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them. Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis. Discovery and etymology German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz. For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. 
Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death. The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans, and homologues of these genes function in humans to regulate apoptosis. In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent and one with the second p pronounced. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc. In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation: We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid. Activation mechanisms The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC). A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell death. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single-cell fluctuations have been observed in experimental studies of stress-induced apoptosis. 
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain. Intrinsic pathway The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis. During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers of Bax/Bak inserted into the outer membrane. Once cytochrome c is released it binds with Apoptotic protease activating factor – 1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase into the effector caspase-3. Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability. Extrinsic pathway Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals. TNF pathway TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. 
The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis. Fas pathway The Fas receptor (first apoptosis signal; also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8, and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8. Common components Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance is the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal conditions in nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family. Caspases Caspases play the central role in the transduction of ER apoptotic signals. Caspases are proteins that are highly conserved, cysteine-dependent aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11, and 12) and effector caspases (caspases 3, 6, and 7). The activation of initiator caspases requires binding to specific oligomeric activator proteins. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program. Caspase-independent apoptotic pathway There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor). Apoptosis model in amphibians The frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. 
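Purely as an illustration of the two-tier caspase classification just described, the following Python sketch encodes the initiator/effector grouping named in the text; the data structure and function names are invented for this example and are not part of any established library.

```python
# Illustrative only: the initiator/effector grouping of caspases as stated in
# the text above; names and structure are invented for this sketch.
INITIATOR_CASPASES = {2, 8, 9, 10, 11, 12}  # activated by binding to oligomeric activator proteins
EFFECTOR_CASPASES = {3, 6, 7}               # activated by initiator caspases via proteolytic cleavage


def caspase_role(number: int) -> str:
    """Return the role of a caspase according to the classification above."""
    if number in INITIATOR_CASPASES:
        return "initiator"
    if number in EFFECTOR_CASPASES:
        return "effector"
    return "not listed in this classification"


print(caspase_role(9))  # initiator
print(caspase_role(3))  # effector
```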
In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins during amphibian metamorphosis, and stimulate the evolution of their nervous system transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog. Negative regulators of apoptosis Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized into either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors like cFLIP, BNIP3, FADD, Akt, and NF-κB. Proteolytic caspase cascade: Killing the cell Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized. mRNA decay is triggered very early in apoptosis. A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include: Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases. The cytoplasm appears dense, and the organelles appear tightly packed. Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis. The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA. Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death. Apoptotic cell disassembly Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly: Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1). Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia. Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes. 
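The "laddered" electrophoresis pattern described above arises because apoptotic endonucleases cut chromatin in the linker regions between nucleosomes, so fragment lengths cluster at integer multiples of the nucleosomal repeat. A minimal sketch of the expected band sizes, assuming a typical repeat length of roughly 180 base pairs (a common literature value, not stated in this article):

```python
# Illustrative sketch: expected apoptotic DNA "ladder" band sizes.
# Assumes a nucleosomal repeat length of ~180 bp (an assumed literature value);
# apoptotic endonucleases cut the linker DNA between nucleosomes, so fragments
# cluster at integer multiples of the repeat length.

REPEAT_BP = 180  # approximate nucleosomal repeat length in base pairs


def ladder_band_sizes(max_bands: int = 6) -> list[int]:
    """Return the expected fragment sizes (bp) for the first few ladder bands."""
    return [n * REPEAT_BP for n in range(1, max_bands + 1)]


print(ladder_band_sizes())  # [180, 360, 540, 720, 900, 1080]
```

Random (necrotic) DNA degradation, by contrast, would produce a continuous smear of fragment sizes rather than discrete bands, which is why laddering is used to differentiate apoptosis from ischemic or toxic cell death.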
Removal of dead cells The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis. Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation. Pathway knock-outs Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the new phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase 9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used in order to generate an APAF-1 -/- mouse. This assay is used to disrupt gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 -/- cells are protected from apoptosis stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons. The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs have varying damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc. but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. 
A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, Casp9, and APAF-1 KO mice have deformations of neural tissue, and FADD and Casp8 KO mice show defective heart development; however, in both types of KO, other organs develop normally and some cell types are still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist. Methods for distinguishing apoptotic from necrotic cells Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptosis from necroptotic cells can be found in these references. Implication in disease Defective pathways The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept underlying each one is the same: The normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased. A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of apoptotic activator cytochrome c, therefore overexpression leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in regulation of apoptosis in cancer cells occur often at the level of control of transcription factors. As a particular example, defects in molecules that control transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, to curtail dependence on the tissue to which the cell belongs. This degree of independence from external survival signals can enable cancer metastasis. 
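The two-marker readout mentioned above (phosphatidylserine exposure versus cell permeability by flow cytometry) is commonly interpreted with a simple quadrant rule. Below is a minimal sketch of that logic, assuming an annexin V-type stain for phosphatidylserine and a propidium iodide-type stain for membrane permeability (standard choices, though not specified in this article), with purely hypothetical signal thresholds.

```python
# Minimal sketch of the two-marker quadrant logic: phosphatidylserine (PS)
# exposure versus membrane permeability. Annexin V and propidium iodide are
# assumed as the stains; thresholds are hypothetical example values.

def classify_cell(ps_signal: float, permeability_signal: float,
                  ps_cutoff: float = 1.0, perm_cutoff: float = 1.0) -> str:
    """Classify one cell from its two fluorescence signals."""
    ps_positive = ps_signal > ps_cutoff
    permeable = permeability_signal > perm_cutoff
    if ps_positive and not permeable:
        return "early apoptotic"                      # PS exposed, membrane still intact
    if ps_positive and permeable:
        return "late apoptotic / secondary necrotic"  # PS exposed and membrane permeabilized
    if permeable:
        return "necrotic"                             # permeable without prior PS exposure
    return "viable"


print(classify_cell(2.3, 0.4))  # early apoptotic
print(classify_cell(0.2, 2.1))  # necrotic
```

As the surrounding text notes, the distinction ultimately rests on the differential kinetics of these markers rather than on any single positive marker of necrosis.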
Dysregulation of p53 The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in the increase of p53 protein level and enhancement of cancer-cell apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors. Inhibition Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: Cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis"). HeLa cell Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring. Treatments The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility to apoptosis in diseased cells, depending on whether the disease is caused by inhibited or excessive apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death and to increase the apoptotic threshold to treat diseases involved with excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitor (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, adding agents that displace p53 from p53-MDM2 complexes activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway. 
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis. Hyperactive apoptosis On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated. At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM. Treatments Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold, resulting in Bad dissociating from Bcl and thus cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type. HIV progression The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways: HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis. HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis. HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane. Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells. HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue. The infected CD4+ cell may also receive the death signal from a cytotoxic T cell. 
Cells may also die as direct consequences of viral infections. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV's cytotoxic activity toward CD4+ lymphocytes is classified as AIDS once a given patient's CD4+ cell count falls below 200. Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap HIV in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining the drug therapy that currently exists with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV. Viral infection Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and cell cycle maturation. It is also important in maintaining the regular functions and activities of cells. Viruses can trigger apoptosis of infected cells via a range of mechanisms including: Receptor binding Activation of protein kinase R (PKR) Interaction with p53 Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as natural killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis. Canine distemper virus (CDV) is known to cause apoptosis in central nervous system and lymphoid tissue of infected dogs in vivo and in vitro. Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by the internal stimuli caused by viral infection, not by a caspase cascade. The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that apoptosis was induced by the La Crosse virus in the kidney cells of baby hamsters and in the brains of baby mice. OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes a febrile illness, characterized by the sudden onset of fever, known as Oropouche fever. The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected. 
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. This can be interpreted by counting, measuring, and analyzing the cells of the sub-G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the membrane of the mitochondria into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway. For OROV to induce apoptosis, viral uncoating, viral internalization, and the replication of cells are necessary. Apoptosis induced by some viruses is activated by extracellular stimuli. However, studies have demonstrated that OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria. Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function. Viruses can remain intact during apoptosis, in particular in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. Prions can cause apoptosis in neurons. Plants Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear. Caspase-independent apoptosis The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. 
In order to be released, the protein is cleaved by a calcium-dependent calpain protease. See also Anoikis Apaf-1 Apo2.7 Apoptotic DNA fragmentation Atromentin induces apoptosis in human leukemia U937 cells. Autolysis Autophagy Cisplatin Cytotoxicity Entosis Ferroptosis Homeostasis Immunology Necrobiosis Necrosis Necrotaxis Nemosis Mitotic catastrophe p53 Paraptosis Pseudoapoptosis PI3K/AKT/mTOR pathway Explanatory footnotes Citations General bibliography External links Apoptosis & Caspase 3, The Proteolysis Map – animation Apoptosis & Caspase 8, The Proteolysis Map – animation Apoptosis & Caspase 7, The Proteolysis Map – animation Apoptosis MiniCOPE Dictionary – list of apoptosis terms and acronyms Apoptosis (Programmed Cell Death) – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology Apoptosis Research Portal Apoptosis Info Apoptosis protocols, articles, news, and recent publications. Database of proteins involved in apoptosis Apoptosis Video Apoptosis Video (WEHI on YouTube ) The Mechanisms of Apoptosis Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase 8 pathway. Accessed 25 March 2007. WikiPathways – Apoptosis pathway "Finding Cancer's Self-Destruct Button". CR magazine (Spring 2007). Article on apoptosis and cancer. Xiaodong Wang's lecture: Introduction to Apoptosis Robert Horvitz's Short Clip: Discovering Programmed Cell Death The Bcl-2 Database DeathBase: a database of proteins involved in cell death, curated by experts European Cell Death Organization Apoptosis signaling pathway created by Cusabio Cell signaling Cellular senescence Immunology Medical aspects of death Programmed cell death
Apoptosis
[ "Chemistry", "Biology" ]
9,058
[ "Signal transduction", "Senescence", "Cellular senescence", "Immunology", "Cellular processes", "Apoptosis", "Programmed cell death" ]
2,504
https://en.wikipedia.org/wiki/Amphetamine
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity; it is also used to treat binge eating disorder in the form of its inactive prodrug lisdexamfetamine. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, decreased appetite, elevated heart rate, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., hallucinations, delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects. Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group. Uses Medical Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy, obesity, and, in the form of lisdexamfetamine, binge eating disorder. It is sometimes prescribed for its past medical indications, particularly for depression and chronic pain. 
ADHD Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia. Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. Additionally, a 2024 meta-analytic systematic review reported moderate improvements in quality of life when amphetamine treatment is used for ADHD. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. A 2025 meta-analytic systematic review of 113 randomized controlled trials demonstrated that stimulant medications significantly improved core ADHD symptoms in adults over a three-month period, with good acceptability compared to other pharmacological and non-pharmacological treatments. Models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Stimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. 
The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals. Binge eating disorder Binge eating disorder (BED) is characterized by recurrent and persistent episodes of compulsive binge eating. These episodes are often accompanied by marked distress and a feeling of loss of control over eating. The pathophysiology of BED is not fully understood, but it is believed to involve dysfunctional dopaminergic reward circuitry along the cortico-striatal-thalamic-cortical loop. As of July 2024, lisdexamfetamine is the only USFDA- and TGA-approved pharmacotherapy for BED. Evidence suggests that lisdexamfetamine's treatment efficacy in BED is underpinned at least in part by a psychopathological overlap between BED and ADHD, with the latter conceptualized as a cognitive control disorder that also benefits from treatment with lisdexamfetamine. Lisdexamfetamine's therapeutic effects for BED primarily involve direct action in the central nervous system after conversion to its pharmacologically active metabolite, dextroamphetamine. Centrally, dextroamphetamine increases neurotransmitter activity of dopamine and norepinephrine in prefrontal cortical regions that regulate cognitive control of behavior. Similar to its therapeutic effect in ADHD, dextroamphetamine enhances cognitive control and may reduce impulsivity in patients with BED by enhancing the cognitive processes responsible for overriding prepotent feeding responses that precede binge eating episodes. In addition, dextroamphetamine's actions outside of the central nervous system may also contribute to its treatment effects in BED. Peripherally, dextroamphetamine triggers lipolysis through noradrenergic signaling in adipose fat cells, leading to the release of triglycerides into blood plasma to be utilized as a fuel substrate. Dextroamphetamine also activates TAAR1 in peripheral organs along the gastrointestinal tract that are involved in the regulation of food intake and body weight. Together, these actions confer an anorexigenic effect that promotes satiety in response to feeding and may decrease binge eating as a secondary effect. Medical reviews of randomized controlled trials have demonstrated that lisdexamfetamine, at doses between 50–70 mg, is safe and effective for the treatment of moderate-to-severe BED in adults. These reviews suggest that lisdexamfetamine is persistently effective at treating BED and is associated with significant reductions in the number of binge eating days and binge eating episodes per week. Furthermore, a meta-analytic systematic review highlighted an open-label, 12-month extension safety and tolerability study that reported lisdexamfetamine remained effective at reducing the number of binge eating days for the duration of the study. 
In addition, both a review and a meta-analytic systematic review found lisdexamfetamine to be superior to placebo in several secondary outcome measures, including persistent binge eating cessation, reduction of obsessive-compulsive related binge eating symptoms, reduction of body weight, and reduction of triglycerides. Lisdexamfetamine, like all pharmaceutical amphetamines, has direct appetite suppressant effects that may be therapeutically useful in both BED and its comorbidities. Based on reviews of neuroimaging studies involving BED-diagnosed participants, therapeutic neuroplasticity in dopaminergic and noradrenergic pathways from long-term use of lisdexamfetamine may be implicated in lasting improvements in the regulation of eating behaviors that are observed even after the drug is discontinued. Narcolepsy Narcolepsy is a chronic sleep-wake disorder that is associated with excessive daytime sleepiness, cataplexy, and sleep paralysis. Patients with narcolepsy are diagnosed as either type 1 or type 2, with only the former presenting cataplexy symptoms. Type 1 narcolepsy results from the loss of approximately 70,000 orexin-releasing neurons in the lateral hypothalamus, leading to significantly reduced cerebrospinal orexin levels; this reduction is a diagnostic biomarker for type 1 narcolepsy. Lateral hypothalamic orexin neurons innervate every component of the ascending reticular activating system (ARAS), which includes noradrenergic, dopaminergic, histaminergic, and serotonergic nuclei that promote wakefulness. Amphetamine's therapeutic mode of action in narcolepsy primarily involves increasing monoamine neurotransmitter activity in the ARAS. This includes noradrenergic neurons in the locus coeruleus, dopaminergic neurons in the ventral tegmental area, histaminergic neurons in the tuberomammillary nucleus, and serotonergic neurons in the dorsal raphe nucleus. Dextroamphetamine, the more dopaminergic enantiomer of amphetamine, is particularly effective at promoting wakefulness because dopamine release has the greatest influence on cortical activation and cognitive arousal, relative to other monoamines. In contrast, levoamphetamine may have a greater effect on cataplexy, a symptom more sensitive to the effects of norepinephrine and serotonin. Noradrenergic and serotonergic nuclei in the ARAS are involved in the regulation of the REM sleep cycle and function as "REM-off" cells, with amphetamine's effect on norepinephrine and serotonin contributing to the suppression of REM sleep and a possible reduction of cataplexy at high doses. The American Academy of Sleep Medicine (AASM) 2021 clinical practice guideline conditionally recommends dextroamphetamine for the treatment of both type 1 and type 2 narcolepsy. Treatment with pharmaceutical amphetamines is generally less preferred relative to other stimulants (e.g., modafinil) and is considered a third-line treatment option. Medical reviews indicate that amphetamine is safe and effective for the treatment of narcolepsy. Amphetamine appears to be most effective at improving symptoms associated with hypersomnolence, with three reviews finding clinically significant reductions in daytime sleepiness in patients with narcolepsy. Additionally, these reviews suggest that amphetamine may dose-dependently improve cataplexy symptoms. However, the quality of evidence for these findings is low and is consequently reflected in the AASM's conditional recommendation for dextroamphetamine as a treatment option for narcolepsy. 
Enhancing performance Cognitive performance In 2015, a systematic review and a meta-analysis of high-quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine D1 receptor and α2-adrenergic receptor in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, a proportion of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control. Physical performance Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature. Recreational Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and, like other amphetamines, is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. 
Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit; a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement and positively-valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open dexedrine capsules and crush the contents in order to insufflate (snort) it or subsequently dissolve it in water and inject it. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops. Contraindications According to the International Programme on Chemical Safety (IPCS) and the U.S. Food and Drug Administration (FDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the FDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the FDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical. Adverse effects The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the U.S. FDA for long-term therapeutic use. 
Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes. Physical Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses. Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids. FDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease. Psychological At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the FDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility. Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine. 
Reinforcement disorders Addiction Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction. Biomolecular mechanisms Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. 
Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs. The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation. Pharmacological treatments there is no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil. Behavioral treatments A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these. 
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or immunoreactivity in the striatum or other parts of the reward system. Dependence and withdrawal Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect. According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for  weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose. Overdose An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide ( deaths, 95% confidence). Toxicity In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. 
Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability. Psychosis An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that a proportion of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use. Drug interactions Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine, respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. Norepinephrine reuptake inhibitors (NRIs) like atomoxetine prevent norepinephrine release induced by amphetamines and have been found to reduce the stimulant, euphoriant, and sympathomimetic effects of dextroamphetamine in humans. In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic). Pharmacology Pharmacodynamics Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, are increased dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. 
The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum. Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cyclic AMP (cAMP) production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons. In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression; CART is a neuropeptide involved in feeding behavior, stress, and reward, and it induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor of its own. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with low micromolar affinity. The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with several distinct biological targets, including those described below. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain. Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects. Dopamine In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through the dopamine transporter (DAT) or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. 
Phosphorylation by either protein kinase can result in DAT internalization (a form of reuptake inhibition), but phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state. Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, VMAT2. Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT. Norepinephrine Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at the phosphorylated norepinephrine transporter (NET), competitive NET reuptake inhibition, and norepinephrine release from VMAT2. Serotonin Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via the serotonin transporter (SERT) and, like norepinephrine, is thought to phosphorylate SERT via TAAR1. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor. Other neurotransmitters, peptides, hormones, and enzymes Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through VMAT2. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis. In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma. Pharmacokinetics The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically 90%. 
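As a rough illustration of the pH dependence noted here and spelled out in the passage that follows, the sketch below applies the Henderson–Hasselbalch relationship to estimate the un-ionized (absorbable free base) fraction of a weak base at different pH values. The pKa of 9.9 is the value quoted in this article; the specific pH values are illustrative assumptions rather than figures from the source.

```python
def unionized_fraction_weak_base(pka: float, ph: float) -> float:
    """Henderson-Hasselbalch estimate of the fraction of a weak base in its
    un-ionized (free base) form: 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

PKA_AMPHETAMINE = 9.9  # value quoted in the article

# Illustrative pH values only (assumed for the example):
for label, ph in [("acidic urine", 5.5), ("normal urine", 6.5), ("plasma", 7.4), ("alkaline urine", 8.0)]:
    frac = unionized_fraction_weak_base(PKA_AMPHETAMINE, ph)
    print(f"{label:>14} (pH {ph}): {frac:.4%} un-ionized")
```

Under these assumptions nearly all of the drug is ionized at acidic and physiological pH, with the un-ionized fraction rising as the pH approaches the pKa, which is consistent with the statements below that alkaline conditions favour absorption while acidic conditions favour excretion.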
Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. Approximately of amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue. The half-lives of amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-lives of dextroamphetamine and levoamphetamine are  hours and  hours, respectively. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose respectively. Amphetamine is eliminated via the kidneys, with of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose. CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are , , and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following: Pharmacomicrobiomics The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics. Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. 
This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds. Related endogenous compounds Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and , a structural isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from by the aromatic amino acid decarboxylase (AADC) enzyme, which converts into dopamine as well. In turn, is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and regulate monoamine neurotransmission via ; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine. Chemistry Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula . The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor, and acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of . Substituted derivatives The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups. Synthesis Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields . This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt. A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. 
Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine. A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta chloropropylbenzene which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with which through a double alkylation with methyl iodide followed by benzyl chloride can be converted into acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4). A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional groups. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields . The double bond and nitro group of this intermediate is reduced using either catalytic hydrogenation or by treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6). Detection in body fluids Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs, (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for  days. For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with the derivatizing agent chloride allows for the detection of methamphetamine in urine. 
GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug. History, society, and culture Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes. Amphetamine is illegally synthesized in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states 11.9 million adults of ages have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU ranged from  per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA. Legal status As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Brazil (class A3), Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment. Pharmaceutical products Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, , Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. 
A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below. Notes Image legend Reference notes References External links  – Dextroamphetamine  – Levoamphetamine Comparative Toxicogenomics Database entry: Amphetamine Comparative Toxicogenomics Database entry: CARTPT 5-HT1A agonists Anorectics Aphrodisiacs Attention deficit hyperactivity disorder management Carbonic anhydrase activators Drugs acting on the cardiovascular system Drugs acting on the nervous system Drugs in sport Ergogenic aids Euphoriants Excitatory amino acid reuptake inhibitors German inventions Human drug metabolites Monoaminergic activity enhancers Narcolepsy Nootropics Norepinephrine-dopamine releasing agents Phenethylamines Stimulants Substituted amphetamines TAAR1 agonists VMAT inhibitors World Anti-Doping Agency prohibited substances
Amphetamine
[ "Chemistry" ]
13,240
[ "Chemicals in medicine", "Human drug metabolites" ]
4,083,646
https://en.wikipedia.org/wiki/Electrofusion
Electrofusion is a method of joining MDPE, HDPE and other plastic pipes using special fittings that have built-in electric heating elements which are used to weld the joint together. The pipes to be joined are cleaned, inserted into the electrofusion fitting and secured with alignment clamps; a voltage (typically 40 V) is then applied for a fixed time that depends on the fitting in use. The built-in heater coils then melt the inside of the fitting and the outside of the pipe wall, which weld together producing a very strong homogeneous joint. The assembly is then left to cool for a specified time. Electrofusion welding is beneficial because it does not require the operator to use dangerous or sophisticated equipment. After some preparation, the electrofusion welder will guide the operator through the steps to take. Welding heat and time are dependent on the type and size of the fitting. Not all electrofusion fittings are created equal – precise positioning of the energising coils of wire in each fitting ensures uniform melting for a strong joint and the minimisation of welding and cooling time. The operator must be qualified according to the local and national laws. In Australia, an electrofusion course can be done within 8 hours. Electrofusion welding training focuses on the importance of accurately fusing EF fittings. Learning both manual and automatic methods of calculating electrofusion welding time gives operators the skills they need in the field. There is much to learn about the importance of preparation, timing, pressure, temperature, cool down time and handling, etc. Training and certification are very important in this field of welding, as the product can become dangerous under certain circumstances. There have been cases of major harm and death, including when molten polyethylene spurts out of the edge of a misaligned weld, causing skin burns. Another case was due to a tapping saddle being incorrectly installed on a gas line, causing the death of the two welders in the trench due to gas inhalation. There are many critical parts to electrofusion welding that can cause weld failures, most of which can be greatly reduced by using welding clamps, and correct scraping equipment. To keep their qualification current, a trained operator can get their fitting tested, which involves cutting open the fitting and examining the integrity of the weld. References Piping Plumbing
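The weld parameters described above (a fixed voltage applied to the fitting's coil for a fitting-specific time) amount to delivering a controlled amount of Joule heating into the joint. The sketch below is only a back-of-the-envelope illustration of that relationship; the coil resistance and fusion time are assumed values chosen for the example, not specifications from this article or any manufacturer, and in practice welders read these parameters from the fitting's barcode or data plate.

```python
def joule_heat_energy(voltage_v: float, coil_resistance_ohm: float, fusion_time_s: float) -> float:
    """Electrical energy dissipated in the fitting's heater coil: E = (V^2 / R) * t, in joules."""
    power_w = voltage_v ** 2 / coil_resistance_ohm
    return power_w * fusion_time_s

VOLTAGE_V = 40.0            # typical applied voltage quoted in the article
COIL_RESISTANCE_OHM = 1.5   # assumed for illustration
FUSION_TIME_S = 120.0       # assumed for illustration

energy_j = joule_heat_energy(VOLTAGE_V, COIL_RESISTANCE_OHM, FUSION_TIME_S)
print(f"Power into coil: {VOLTAGE_V ** 2 / COIL_RESISTANCE_OHM:.0f} W")
print(f"Energy delivered over {FUSION_TIME_S:.0f} s: {energy_j / 1000:.1f} kJ")
```

The point of the sketch is simply that, for a given coil, the applied voltage and fusion time together determine how much heat goes into the melt zone, which is why the correct time for each fitting matters so much.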
Electrofusion
[ "Chemistry", "Engineering" ]
470
[ "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Mechanical engineering", "Piping" ]
4,085,687
https://en.wikipedia.org/wiki/Thermoplastic%20elastomer
Thermoplastic elastomers (TPE), sometimes referred to as thermoplastic rubbers (TPR), are a class of copolymers or a physical mix of polymers (usually a plastic and a rubber) that consist of materials with both thermoplastic and elastomeric properties. While most elastomers are thermosets, thermoplastic elastomers are not, which makes them relatively easy to use in manufacturing, for example, by injection moulding. Thermoplastic elastomers show advantages typical of both rubbery materials and plastic materials. The benefit of using thermoplastic elastomers is the ability to stretch to moderate elongations and return to nearly their original shape, giving a longer life and better physical range than other materials. The principal difference between thermoset elastomers and thermoplastic elastomers is the type of cross-linking bond in their structures. In fact, crosslinking is a critical structural factor which imparts high elastic properties. Types There are six generic classes of commercial TPEs (designations according to ISO 18064) together with one unclassified category: Styrenic block copolymers, TPS (TPE-s) Thermoplastic polyolefin elastomers, TPO (TPE-o) Thermoplastic vulcanizates, TPV (TPE-v or TPV) Thermoplastic polyurethanes, TPU (TPU) Thermoplastic copolyester, TPC (TPE-E) Thermoplastic polyamides, TPA (TPE-A) Unclassified thermoplastic elastomers, TPZ Examples TPE materials that come from the block copolymers group include CAWITON†, MELIFLEX, THERMOLAST K†, THERMOLAST M†, Chemiton, Arnitel, Hytrel, Dryflex†, Mediprene, Kraton, Pibiflex, Sofprene†, Tuftec† and Laprene†. † indicates styrenic block copolymers (TPE-s). Laripur, Desmopan, Estane, Texin and Elastollan are examples of thermoplastic polyurethanes (TPU). Sarlink, Santoprene, Termoton, Solprene, THERMOLAST V, Vegaprene, and Forprene are examples of TPV materials. Examples of thermoplastic olefin elastomer (TPO) compounds are For-Tec E and Engage. Ninjaflex is used for 3D printing. Criteria for thermoplastic elastomers In order to qualify as a thermoplastic elastomer, a material must have these three essential characteristics: The ability to be stretched to moderate elongations and, upon the removal of stress, return to something close to its original shape Processable as a melt at elevated temperature Absence of significant creep History TPE became a commercial reality when thermoplastic polyurethane polymers became available in the 1950s. During the 1960s styrene block copolymer became available, and in the 1970s a wide range of TPEs came on the scene. The worldwide usage of TPEs (680,000 tons/year in 1990) is growing at about nine percent per year. Microstructure The styrene-butadiene materials possess a two-phase microstructure due to incompatibility between the polystyrene and polybutadiene blocks, the former separating into spheres or rods depending on the exact composition. With low polystyrene content, the material is elastomeric with the properties of the polybutadiene predominating. Generally they offer a much wider range of properties than conventional cross-linked rubbers because the composition can vary to suit final construction goals. Block copolymers can "microphase separate" to form periodic nanostructures, as in the styrene-butadiene-styrene (SBS) block copolymer. The polymer is known as Kraton and is used for shoe soles and adhesives. Owing to the microfine structure, a transmission electron microscope (TEM) was needed to examine the structure. 
The butadiene matrix was stained with osmium tetroxide to provide contrast in the image. The material was made by living polymerization so that the blocks are almost monodisperse, so helping to create a very regular microstructure. The molecular weight of the polystyrene blocks in the main picture is 102,000; the inset picture has a molecular weight of 91,000, producing slightly smaller domains. The spacing between domains has been confirmed by small-angle X-ray scattering, a technique which gives information about microstructure. Since most polymers are incompatible with one another, forming a block polymer will usually result in phase separation, and the principle has been widely exploited since the introduction of the SBS block polymers, especially where one of the block is highly crystalline. One exception to the rule of incompatibility is the material Noryl, where polystyrene and polyphenylene oxide or PPO form a continuous blend with one another. Other TPEs have crystalline domains where one kind of block co-crystallizes with other block in adjacent chains, such as in copolyester rubbers, achieving the same effect as in the SBS block polymers. Depending on the block length, the domains are generally more stable than the latter owing to the higher crystal melting point. That point determines the processing temperatures needed to shape the material, as well as the ultimate service use temperatures of the product. Such materials include Hytrel, a polyester-polyether copolymer and Pebax, a nylon or polyamide-polyether copolymer. Advantages Depending on the environment, TPEs have outstanding thermal properties and material stability when exposed to a broad range of temperatures and non-polar materials. TPEs consume less energy to produce, can be colored easily by most dyes, and allow economical quality control. TPE requires little or no compounding, with no need to add reinforcing agents, stabilizers or cure systems. Hence, batch-to-batch variations in weighting and metering components are absent, leading to improved consistency in both raw materials and fabricated articles. TPE materials have the potential to be recyclable since they can be molded, extruded and reused like plastics, but they have typical elastic properties of rubbers which are not recyclable owing to their thermosetting characteristics. They can also be ground up and turned into 3D printing filament with a recyclebot. Processing The two most important manufacturing methods with TPEs are extrusion and injection molding. TPEs can now be 3D printed and have been shown to be economically advantageous to make products using distributed manufacturing. Compression molding is seldom, if ever, used. Fabrication via injection molding is extremely rapid and highly economical. Both the equipment and methods normally used for the extrusion or injection molding of a conventional thermoplastic are generally suitable for TPEs. TPEs can also be processed by blow molding, melt calendaring, thermoforming, and heat welding. Applications TPEs are used where conventional elastomers cannot provide the range of physical properties needed in the product. These materials find large application in the automotive sector and in household appliances sector. For instance, copolyester TPEs are used in snowmobile tracks where stiffness and abrasion resistance are at a premium. Thermoplastic olefins (TPO) are increasingly used as a roofing material. 
TPEs are also widely used for catheters where nylon block copolymers offer a range of softness ideal for patients. Thermoplastic silicone and olefin blends are used for extrusion of glass run and dynamic weatherstripping car profiles. Styrene block copolymers are used in shoe soles for their ease of processing, and widely as adhesives. Owing to their unrivaled abilities in two-component injection molding to various thermoplastic substrates, engineered TPS materials also cover a broad range of technical applications ranging from automotive market to consumer and medical products. Examples of those are soft grip surfaces, design elements, back-lit switches and surfaces, as well as sealings, gaskets, or damping elements. TPE is commonly used to make suspension bushings for automotive performance applications because of its greater resistance to deformation when compared to regular rubber bushings. Thermoplastics have experienced growth in the heating, ventilation, and air conditioning (HVAC) industry due to the function, cost effectiveness and adaptability to modify plastic resins into a variety of covers, fans and housings. References Further reading PR Lewis and C Price, Polymer, 13, 20 (1972) Modern Plastic Mid-October Encyclopedia Issue, Introduction to TPEs, page:109-110 Latest Material and Technological Developments for Activewear, (Joanne Yip, 2020, page 66-67) Biomaterials Polymers
Thermoplastic elastomer
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
1,941
[ "Biomaterials", "Materials", "Polymer chemistry", "Polymers", "Matter", "Medical technology" ]
567,472
https://en.wikipedia.org/wiki/Cloud%20condensation%20nuclei
Cloud condensation nuclei (CCNs), also known as cloud seeds, are small particles typically 0.2 μm, or one hundredth the size of a cloud droplet. CCNs are a unique subset of aerosols in the atmosphere on which water vapour condenses. This can affect the radiative properties of clouds and the overall atmosphere. Water vapour requires a non-gaseous surface to make the transition to a liquid; this process is called condensation. In the atmosphere of Earth, this surface presents itself as tiny solid or liquid particles called CCNs. When no CCNs are present, water vapour can be supercooled at about for 5–6 hours before droplets spontaneously form. This is the basis of the cloud chamber for detecting subatomic particles. The concept of CCN is used in cloud seeding, which tries to encourage rainfall by seeding the air with condensation nuclei. It has further been suggested that creating such nuclei could be used for marine cloud brightening, a climate engineering technique. Some natural environmental phenomena, such as the one proposed in the CLAW hypothesis also arise from the interaction between naturally produced CCNs and cloud formation. Properties Size A typical raindrop is about 2 mm in diameter, a typical cloud droplet is on the order of 0.02 mm, and a typical cloud condensation nucleus (aerosol) is on the order of 0.0001 mm or 0.1 μm or greater in diameter. The number of cloud condensation nuclei in the air can be measured at ranges between around 100 to 1000 per cm3. The total mass of CCNs injected into the atmosphere has been estimated at over a year's time. Composition There are many different types of atmospheric particulates that can act as CCN. The particles may be composed of dust or clay, soot or black carbon from grassland or forest fires, sea salt from ocean wave spray, soot from factory smokestacks or internal combustion engines, sulfate from volcanic activity, phytoplankton or the oxidation of sulfur dioxide and secondary organic matter formed by the oxidation of volatile organic compounds. The ability of these different types of particles to form cloud droplets varies according to their size and also their exact composition, as the hygroscopic properties of these different constituents are very different. Sulfate and sea salt, for instance, readily absorb water whereas soot, organic carbon, and mineral particles do not. This is made even more complicated by the fact that many of the chemical species may be mixed within the particles (in particular the sulfate and organic carbon). Additionally, while some particles (such as soot and minerals) do not make very good CCN, they do act as ice nuclei in colder parts of the atmosphere. Abundance The number and type of CCNs can affect the precipitation amount, lifetimes, and radiative properties of clouds and their lifetimes. Ultimately, this has an influence on climate change. Modeling research led by Marcia Baker revealed that sources and sinks are balanced by coagulation and coalescence which leads to stable levels of CCNs in the atmosphere. There is also speculation that solar variation may affect cloud properties via CCNs, and hence affect climate. Airborne Measurements The airborne measurements of these individual mixed aerosols that can form CCN at SGP site were performed using a research aircraft. CCN study by Kulkarni et al 2023 describes the complexity in modeling CCN concentrations. 
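The size comparison quoted under Properties (a raindrop of roughly 2 mm, a cloud droplet of roughly 0.02 mm, and a condensation nucleus of roughly 0.0001 mm in diameter) can be made concrete with a little arithmetic on volumes. The sketch below simply cubes the diameter ratios, so it assumes nothing beyond the figures already given in this article.

```python
# Diameters quoted in the article, in millimetres.
RAINDROP_MM = 2.0
CLOUD_DROPLET_MM = 0.02
CCN_MM = 0.0001

def volume_ratio(d_large_mm: float, d_small_mm: float) -> float:
    """Ratio of sphere volumes, which depends only on the cube of the diameter ratio."""
    return (d_large_mm / d_small_mm) ** 3

print(f"Cloud droplets per raindrop (by volume): ~{volume_ratio(RAINDROP_MM, CLOUD_DROPLET_MM):.0e}")
print(f"Raindrop-to-CCN volume ratio:            ~{volume_ratio(RAINDROP_MM, CCN_MM):.0e}")
```

On the order of a million cloud droplets must therefore coalesce to form a single raindrop, which helps explain why the number and properties of the available nuclei have such a strong influence on cloud behaviour and precipitation.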
Applications Cloud seeding Cloud seeding is a process by which small particulates are added to the atmosphere to induce cloud formation and precipitation. This has been done by dispersing salts using aerial or ground-based methods. Other methods have been researched, like using laser pulses to excite molecules in the atmosphere, and more recently, in 2021, electric charge emission using drones. The effectiveness of these methods is not consistent. Many studies did not notice a statistically significant difference in precipitation while others have. Cloud seeding may also occur from natural processes such as forest fires, which release small particles into the atmosphere that can act as nuclei. Marine cloud brightening Marine cloud brightening is a climate engineering technique which involves the injection of small particles into clouds to enhance their reflectivity, or albedo. The motive behind this technique is to control the amount of sunlight allowed to reach ocean surfaces in hopes of lowering surface temperatures through radiative forcing. Many methods involve the creation of small droplets of seawater to deliver sea salt particles into overlying clouds. Complications may arise when reactive chlorine and bromine from sea salt react with existing molecules in the atmosphere. They have been shown to reduce ozone in the atmosphere; the same effect reduces hydroxide which correlates to the increased longevity of methane, a greenhouse gas. Relation with phytoplankton and climate A 1987 article in Nature found that global climate may occur in a feedback loop due to the relationship between CCNs, the temperature regulating behaviors of clouds, and oceanic phytoplankton. This phenomenon has since been referred to as the CLAW hypothesis, after the authors of the original study. A common CCN over oceans is sulphate aerosols. These aerosols are formed from the dimethyl sulfide (DMS) produced by algae found in seawater. Large algal blooms, observed to have increased in areas such as the South China Sea, can contribute a substantial amount of DMS into their surrounding atmospheres, leading to increased cloud formation. As the activity of phytoplankton is temperature reliant, this negative-feedback loop can act as a form of climate regulation. The Revenge of Gaia, written by James Lovelock, an author of the 1987 study, proposes an alternative relationship between ocean temperatures and phytoplankton population size. This has been named the anti-CLAW hypothesis In this scenario, the stratification of oceans causes nutrient-rich cold water to become trapped under warmer water, where sunlight for photosynthesis is most abundant. This inhibits the growth of phytoplankton, resulting in the decrease in their population, and the sulfate CCNs they produce, with increasing temperature. This interaction thus lowers cloud albedo through decreasing CCN-induced cloud formations and increases the solar radiation allowed to reach ocean surfaces, resulting in a positive-feedback loop. From volcanoes Volcanoes emit a significant amount of microscopic gas and ash particles into the atmosphere when they erupt, which become atmospheric aerosols. By increasing the number of aerosol particles through gas-to-particle conversion processes, the contents of these eruptions can then affect the concentrations of potential cloud condensation nuclei (CCN) and ice nucleating particles (INP), which in turn affects cloud properties and leads to changes in local or regional climate. 
Of these gases, sulfur dioxide, carbon dioxide, and water vapour are most commonly found in volcanic eruptions. While water vapour and carbon dioxide CCNs are naturally abundant in the atmosphere, the increase of sulfur dioxide CCNs can impact the climate by causing global cooling. Almost 9.2 Tg of sulfur dioxide (SO2) is emitted from volcanoes annually. This sulfur dioxide undergoes a transformation into sulfuric acid, which quickly condenses in the stratosphere to produce fine sulfate aerosols. The Earth's lower atmosphere, or troposphere, cools as a result of the aerosols' increased capability to reflect solar radiation back into space. Effect on air pollution See also Bergeron process Contrail Evapotranspiration Global dimming Nucleation Seed crystal Water cycle References Further reading Fletcher, Neville H. (2011). The physics of rainclouds (Paperback ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-15479-6. OCLC 85709529. External links www.grida.no An easy experiment to do at home (in French) Cloud and fog physics Particulates
Cloud condensation nuclei
[ "Chemistry" ]
1,625
[ "Particulates", "Particle technology" ]
567,580
https://en.wikipedia.org/wiki/Gaussian%20integral
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function $e^{-x^2}$ over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$. Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809, attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function. Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for $\int e^{-x^2}\,dx$, but the definite integral $\int_{-\infty}^{\infty} e^{-x^2}\,dx$ can be evaluated. The definite integral of an arbitrary Gaussian function is $\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\pi/a}$. Computation By polar coordinates A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that $\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy$. Consider the function $e^{-(x^2+y^2)} = e^{-r^2}$ on the plane $\mathbb{R}^2$, and compute its integral two ways: on the one hand, by double integration in the Cartesian coordinate system, its integral is a square, $\left(\int e^{-x^2}\,dx\right)^2$; on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be $2\pi\int_{0}^{\infty} r e^{-r^2}\,dr = \pi$. Comparing these two computations yields the integral, though one should take care about the improper integrals involved. Here the factor of $r$ is the Jacobian determinant which appears because of the transform to polar coordinates ($r\,dr\,d\theta$ is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking $s = -r^2$, so $ds = -2r\,dr$. Combining these yields $\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \pi$, so $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$. Complete proof To justify the improper double integrals and equating the two expressions, we begin with an approximating function $I(a) = \int_{-a}^{a} e^{-x^2}\,dx$. If the integral were absolutely convergent we would have that its Cauchy principal value, that is, the limit $\lim_{a\to\infty} I(a)$, would coincide with $\int_{-\infty}^{\infty} e^{-x^2}\,dx$. To see that this is the case, consider that $e^{-x^2}$ is dominated by the integrable function $e^{1-|x|}$ (since $x^2 \ge |x| - 1$ for all real $x$), so the integral converges absolutely. We can therefore compute $\int_{-\infty}^{\infty} e^{-x^2}\,dx$ by just taking the limit $\lim_{a\to\infty} I(a)$. Taking the square of $I(a)$ yields $I(a)^2 = \int_{-a}^{a}\int_{-a}^{a} e^{-(x^2+y^2)}\,dx\,dy$. Using Fubini's theorem, the above double integral can be seen as an area integral taken over a square with vertices $(\pm a, \pm a)$ on the xy-plane. Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than $I(a)^2$, and similarly the integral taken over the square's circumcircle must be greater than $I(a)^2$. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates ($x = r\cos\theta$, $y = r\sin\theta$). Integrating, this gives the bounds $\pi\left(1-e^{-a^2}\right) < I(a)^2 < \pi\left(1-e^{-2a^2}\right)$. By the squeeze theorem, this gives the Gaussian integral $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$. 
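As an illustrative aside, the value derived above is easy to confirm numerically. The short sketch below assumes NumPy and SciPy are available and simply compares an adaptive-quadrature estimate of the integral with $\sqrt{\pi}$:

```python
# Numerical check of the Gaussian integral (illustrative; assumes NumPy and SciPy).
import numpy as np
from scipy.integrate import quad

# Integrate e^{-x^2} over the whole real line with adaptive quadrature.
value, abs_err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)

print(value)            # ~1.7724538509055159
print(np.sqrt(np.pi))   # sqrt(pi) = 1.7724538509...
```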
By Cartesian coordinates A different technique, which goes back to Laplace (1812), is the following. Let $y = xs$, so that $dy = x\,ds$. Since the limits on $s$ as $y \to \pm\infty$ depend on the sign of $x$, it simplifies the calculation to use the fact that $e^{-x^2}$ is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is, $I = \int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_{0}^{\infty} e^{-x^2}\,dx$. Thus, over the range of integration, $x \ge 0$, and the variables $y$ and $s$ have the same limits. This yields $I^2 = 4\int_{0}^{\infty}\int_{0}^{\infty} e^{-(x^2+y^2)}\,dy\,dx = 4\int_{0}^{\infty}\int_{0}^{\infty} e^{-x^2(1+s^2)}\,x\,ds\,dx$. Then, using Fubini's theorem to switch the order of integration: $I^2 = 4\int_{0}^{\infty}\left(\int_{0}^{\infty} x\,e^{-x^2(1+s^2)}\,dx\right)ds = 4\int_{0}^{\infty}\frac{ds}{2(1+s^2)} = 2\arctan(s)\Big|_{0}^{\infty} = \pi$. Therefore, $I = \sqrt{\pi}$, as expected. By Laplace's method In Laplace approximation, we deal only with up to second-order terms in Taylor expansion, so we consider $e^{-x^2} \approx 1 - x^2 \approx (1+x^2)^{-1}$. In fact, since $(1+t) \le e^{t}$ for all $t$, we have the exact bounds $(1-x^2) \le e^{-x^2} \le (1+x^2)^{-1}$ (the lower bound for $|x| \le 1$). Then we can take the bound at the Laplace-approximation limit: $(1-x^2)^{n} \le e^{-nx^2} \le (1+x^2)^{-n}$. By trigonometric substitution, those two bounds can be computed exactly, and by taking the square root of the Wallis formula one obtains $\sqrt{\pi}$, the desired lower bound limit. Similarly we can get the desired upper bound limit. Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula. Relation to the gamma function The integrand is an even function, so $\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_{0}^{\infty} e^{-x^2}\,dx$. Thus, after the change of variable $x = \sqrt{t}$, this turns into the Euler integral $2\int_{0}^{\infty}\tfrac{1}{2}\,t^{-1/2}e^{-t}\,dt = \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}$, where $\Gamma$ is the gamma function. This shows why the factorial of a half-integer is a rational multiple of $\sqrt{\pi}$. More generally, $\int_{0}^{\infty} e^{-ax^{b}}\,dx = a^{-1/b}\,\Gamma\!\left(1+\tfrac{1}{b}\right)$, which can be obtained by substituting $t = ax^{b}$ in the integrand of the gamma function. Generalizations The integral of a Gaussian function The integral of an arbitrary Gaussian function is $\int_{-\infty}^{\infty} e^{-ax^2+bx+c}\,dx = \sqrt{\pi/a}\;e^{b^2/(4a)+c}$. An alternative form is $\int_{-\infty}^{\infty} e^{-(ax^2+bx+c)}\,dx = \sqrt{\pi/a}\;e^{(b^2-4ac)/(4a)}$. This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example. Complex form The one-dimensional formula also holds for complex coefficients with positive real part, and more generally $\int_{\mathbb{R}^n} e^{-x^{\mathsf T}Ax}\,d^n x = \sqrt{\pi^{n}/\det A}$ for any positive-definite symmetric matrix $A$. n-dimensional and functional generalization Suppose A is a symmetric positive-definite (hence invertible) precision matrix, which is the matrix inverse of the covariance matrix. Then, $\int_{\mathbb{R}^n} \exp\!\left(-\tfrac{1}{2}x^{\mathsf T}Ax\right)d^n x = \sqrt{(2\pi)^{n}/\det A}$. By completing the square, this generalizes to $\int_{\mathbb{R}^n} \exp\!\left(-\tfrac{1}{2}x^{\mathsf T}Ax + b^{\mathsf T}x\right)d^n x = \sqrt{(2\pi)^{n}/\det A}\;e^{\tfrac{1}{2}b^{\mathsf T}A^{-1}b}$. This fact is applied in the study of the multivariate normal distribution. Also, where σ is a permutation of $\{1,\dots,2N\}$ and the extra factor on the right-hand side is the sum over all combinatorial pairings of $\{1,\dots,2N\}$ of N copies of A−1. Alternatively, for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series. While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that $(2\pi)^{\infty}$ is infinite and also, the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios of such integrals. In the DeWitt notation, the equation looks identical to the finite-dimensional case. n-dimensional with linear term If A is again a symmetric positive-definite matrix, then (assuming all vectors are column vectors) $\int \exp\!\left(-\tfrac{1}{2}\sum_{i,j=1}^{n}A_{ij}x_i x_j + \sum_{i=1}^{n}B_i x_i\right)d^n x = \sqrt{(2\pi)^{n}/\det A}\;e^{\tfrac{1}{2}B^{\mathsf T}A^{-1}B}$. Integrals of similar form, such as $\int_{0}^{\infty} x^{2n} e^{-x^2/a^2}\,dx$ where $n$ is a positive integer, can also be evaluated in closed form. An easy way to derive these is by differentiating under the integral sign. One could also integrate by parts and find a recurrence relation to solve this. 
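As an illustrative aside, two of the formulas above are easy to spot-check numerically: the gamma-function relation for $\int_0^{\infty} e^{-ax^{b}}\,dx$ and the two-dimensional case of the matrix formula. The sketch below assumes NumPy and SciPy are available; the particular values of $a$, $b$, and the matrix are arbitrary choices for the demonstration:

```python
# Illustrative numerical checks of two formulas above (assumes NumPy and SciPy).
import numpy as np
from scipy.integrate import quad, dblquad
from scipy.special import gamma

# 1) Gamma-function relation: integral_0^inf exp(-a*x**b) dx = Gamma(1 + 1/b) / a**(1/b)
a, b = 2.0, 4.0                       # arbitrary sample values
numeric_1, _ = quad(lambda x: np.exp(-a * x**b), 0, np.inf)
closed_1 = gamma(1 + 1/b) / a**(1/b)
print(numeric_1, closed_1)            # both ~0.76218

# 2) Two-dimensional case of integral exp(-x^T A x / 2) d^n x = sqrt((2*pi)^n / det A)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])            # symmetric positive-definite
integrand = lambda y, x: np.exp(-0.5 * np.array([x, y]) @ A @ np.array([x, y]))
numeric_2, _ = dblquad(integrand, -np.inf, np.inf,
                       lambda x: -np.inf, lambda x: np.inf)
closed_2 = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A))
print(numeric_2, closed_2)            # both ~2.8099
```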
Higher-order polynomials Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants. Exponentials of other even polynomials can be solved numerically using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is The mod 2 requirement is because the integral from −∞ to 0 contributes a factor of to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory. See also List of integrals of Gaussian functions Common integrals in quantum field theory Normal distribution List of integrals of exponential functions Error function Berezin integral References Citations Sources Integrals Articles containing proofs Gaussian function Theorems in analysis
Gaussian integral
[ "Mathematics" ]
1,578
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Articles containing proofs", "Mathematical problems" ]
567,946
https://en.wikipedia.org/wiki/Kelvin%E2%80%93Helmholtz%20instability
The Kelvin–Helmholtz instability (after Lord Kelvin and Hermann von Helmholtz) is a fluid instability that occurs when there is velocity shear in a single continuous fluid or a velocity difference across the interface between two fluids. Kelvin–Helmholtz instabilities are visible in the atmospheres of planets and moons, such as in cloud formations on Earth or the Red Spot on Jupiter, and the atmospheres of the Sun and other stars. Theory overview and mathematical concepts Fluid dynamics predicts the onset of instability and transition to turbulent flow within fluids of different densities moving at different speeds. If surface tension is ignored, two fluids in parallel motion with different velocities and densities yield an interface that is unstable to short-wavelength perturbations for all speeds. However, surface tension is able to stabilize the short wavelength instability up to a threshold velocity. If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin–Helmholtz instability is described by the Taylor–Goldstein equation $(U - c)\left(\dfrac{d^{2}\tilde{\phi}}{dz^{2}} - k^{2}\tilde{\phi}\right) - \dfrac{d^{2}U}{dz^{2}}\,\tilde{\phi} + \dfrac{N^{2}}{U - c}\,\tilde{\phi} = 0$, where $N$ denotes the Brunt–Väisälä frequency, U is the horizontal parallel velocity, k is the wave number, c is the eigenvalue parameter of the problem, and $\tilde{\phi}$ is the complex amplitude of the stream function. Its onset is given by the gradient Richardson number Ri; typically the layer is unstable for Ri < 0.25. These effects are common in cloud layers. The study of this instability is applicable in plasma physics, for example in inertial confinement fusion and the plasma–beryllium interface. In situations where there is a state of static stability (where there is a continuous density gradient), the Rayleigh–Taylor instability is often insignificant compared to the magnitude of the Kelvin–Helmholtz instability. Numerically, the Kelvin–Helmholtz instability is simulated in a temporal or a spatial approach. In the temporal approach, the flow is considered in a periodic (cyclic) box "moving" at mean speed (absolute instability). In the spatial approach, simulations mimic a lab experiment with natural inlet and outlet conditions (convective instability). Discovery and history The existence of the Kelvin–Helmholtz instability was first discovered by German physiologist and physicist Hermann von Helmholtz in 1868. Helmholtz identified that "every perfect geometrically sharp edge by which a fluid flows must tear it asunder and establish a surface of separation". Following that work, in 1871, collaborator William Thomson (later Lord Kelvin) developed a mathematical solution of linear instability whilst attempting to model the formation of ocean wind waves. Throughout the early 20th century, the ideas of Kelvin–Helmholtz instabilities were applied to a range of stratified fluid applications. In the early 1920s, Lewis Fry Richardson developed the concept that such shear instability would only form where shear overcame static stability due to stratification, encapsulated in the Richardson number. Geophysical observations of the Kelvin–Helmholtz instability were made from the late 1960s and early 1970s onward, first for clouds and later for the ocean. 
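As an illustrative aside, the Richardson-number criterion described above can be evaluated for a simple, hypothetical shear-layer profile. The sketch below assumes NumPy is available; the profile parameters are made up purely for demonstration:

```python
# Gradient Richardson number Ri = N^2 / (dU/dz)^2 for an idealized shear layer.
import numpy as np

def gradient_richardson(z, U, rho, g=9.81, rho0=None):
    """Ri on the z grid, with N^2 = -(g/rho0) * d(rho)/dz (Boussinesq form)."""
    rho0 = rho.mean() if rho0 is None else rho0
    dUdz = np.gradient(U, z)
    drhodz = np.gradient(rho, z)
    N2 = -(g / rho0) * drhodz          # squared Brunt-Vaisala frequency
    return N2 / dUdz**2

# Hypothetical stably stratified tanh shear layer (illustrative numbers only).
z = np.linspace(0.0, 100.0, 201)            # height, m
U = 5.0 * np.tanh((z - 50.0) / 10.0)        # velocity, m/s
rho = 1.25 - 0.0005 * z                     # lighter fluid on top, kg/m^3

Ri = gradient_richardson(z, U, rho)
print("minimum Ri:", Ri.min())
print("unstable somewhere (Ri < 0.25):", bool((Ri < 0.25).any()))
```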
See also Rayleigh–Taylor instability Richtmyer–Meshkov instability Mushroom cloud Plateau–Rayleigh instability Kármán vortex street Taylor–Couette flow Fluid mechanics Fluid dynamics Reynolds number Turbulence Notes References Article describing discovery of K-H waves in deep ocean: External links Giant Tsunami-Shaped Clouds Roll Across Alabama Sky - Natalie Wolchover, Livescience via Yahoo.com Tsunami Cloud Hits Florida Coastline Vortex formation in free jet - YouTube video showing Kelvin Helmholtz waves on the edge of a free jet visualised in a scientific experiment. Wave clouds over Christchurch City Kelvin-Helmholtz clouds, in Barmouth, Gwynedd, on 18 February 2017 1868 introductions 1868 in science Fluid dynamics Boundary layer meteorology Clouds Fluid dynamic instabilities Articles containing video clips Hermann von Helmholtz William Thomson, 1st Baron Kelvin Plasma instabilities
Kelvin–Helmholtz instability
[ "Physics", "Chemistry", "Engineering" ]
791
[ "Physical phenomena", "Fluid dynamic instabilities", "Chemical engineering", "Plasma phenomena", "Plasma instabilities", "Piping", "Fluid dynamics" ]
568,674
https://en.wikipedia.org/wiki/Valence%20electron
In chemistry and physics, valence electrons are electrons in the outermost shell of an atom that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms, with both atoms in the bond each contributing one valence electron. The presence of valence electrons can determine the element's chemical properties, such as its valence—whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell. An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond. Similar to a core electron, a valence electron has the ability to absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. Or the electron can even break free from its associated atom's shell; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), then it can move to an inner shell which is not fully occupied. Overview Electron configuration The electrons that determine valence – how an atom reacts chemically – are those with the highest energy. For a main-group element, the valence electrons are defined as those electrons residing in the electronic shell of highest principal quantum number n. Thus, the number of valence electrons that it may have depends on the electron configuration in a simple way. For example, the electronic configuration of phosphorus (P) is 1s2 2s2 2p6 3s2 3p3 so that there are 5 valence electrons (3s2 3p3), corresponding to a maximum valence for P of 5 as in the molecule PF5; this configuration is normally abbreviated to [Ne] 3s2 3p3, where [Ne] signifies the core electrons whose configuration is identical to that of the noble gas neon. However, transition elements have (n−1)d energy levels that are very close in energy to the n level. So as opposed to main-group elements, a valence electron for a transition metal is defined as an electron that resides outside a noble-gas core. Thus, generally, the d electrons in transition metals behave as valence electrons although they are not in the outermost shell. For example, manganese (Mn) has configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d5; this is abbreviated to [Ar] 4s2 3d5, where [Ar] denotes a core configuration identical to that of the noble gas argon. In this atom, a 3d electron has energy similar to that of a 4s electron, and much higher than that of a 3s or 3p electron. In effect, there are possibly seven valence electrons (4s2 3d5) outside the argon-like core; this is consistent with the chemical fact that manganese can have an oxidation state as high as +7 (in the permanganate ion MnO4−). 
(But note that merely having that number of valence electrons does not imply that the corresponding oxidation state will exist. For example, fluorine is not known in oxidation state +7; and although the maximum known number of valence electrons is 16 in ytterbium and nobelium, no oxidation state higher than +9 is known for any element.) The farther right in each transition metal series, the lower the energy of an electron in a d subshell and the less such an electron has valence properties. Thus, although a nickel atom has, in principle, ten valence electrons (4s2 3d8), its oxidation state never exceeds four. For zinc, the 3d subshell is complete in all known compounds, although it does contribute to the valence band in some compounds. Similar patterns hold for the (n−2)f energy levels of inner transition metals. The d electron count is an alternative tool for understanding the chemistry of a transition metal. The number of valence electrons The number of valence electrons of an element can be determined by the periodic table group (vertical column) in which the element is categorized. In groups 1–12, the group number matches the number of valence electrons; in groups 13–18, the units digit of the group number matches the number of valence electrons. Helium is the sole exception: despite having a 1s2 configuration with two valence electrons, and thus having some similarities with the alkaline earth metals with their ns2 valence configurations, its shell is completely full and hence it is chemically very inert and is usually placed in group 18 with the other noble gases. Valence shell The valence shell is the set of orbitals which are energetically accessible for accepting electrons to form chemical bonds. For main-group elements, the valence shell consists of the ns and np orbitals in the outermost electron shell. For transition metals the orbitals of the incomplete (n−1)d subshell are included, and for lanthanides and actinides incomplete (n−2)f and (n−1)d subshells. The orbitals involved can be in an inner electron shell and do not all correspond to the same electron shell or principal quantum number n in a given element, but they are all at similar energies. As a general rule, a main-group element (except hydrogen or helium) tends to react to form an s2p6 electron configuration. This tendency is called the octet rule, because each bonded atom has 8 valence electrons including shared electrons. Similarly, a transition metal tends to react to form a d10s2p6 electron configuration. This tendency is called the 18-electron rule, because each bonded atom has 18 valence electrons including shared electrons. The heavy group 2 elements calcium, strontium, and barium can use the (n−1)d subshell as well, giving them some similarities to transition metals. Chemical reactions The number of valence electrons in an atom governs its bonding behavior. Therefore, elements whose atoms have the same number of valence electrons are often grouped together in the periodic table of the elements, especially if they also have the same types of valence orbitals. The most reactive kind of metallic element is an alkali metal of group 1 (e.g., sodium or potassium); this is because such an atom has only a single valence electron. During the formation of an ionic bond, which provides the necessary ionization energy, this one valence electron is easily lost to form a positive ion (cation) with a closed shell (e.g., Na+ or K+). 
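As an illustrative aside, the group-number rule stated above under "The number of valence electrons" is simple enough to encode in a few lines; the function below is a hypothetical sketch of that rule of thumb only (it does not handle the subtleties of transition-metal d electrons discussed earlier):

```python
def valence_electrons(group, element=None):
    """Rule of thumb: groups 1-12 -> group number; groups 13-18 -> units digit.
    Helium is the noted exception (2 valence electrons despite sitting in group 18)."""
    if element == "He":
        return 2
    if 1 <= group <= 12:
        return group
    if 13 <= group <= 18:
        return group % 10
    raise ValueError("group must be between 1 and 18")

print(valence_electrons(1))           # alkali metals: 1
print(valence_electrons(15))          # e.g. phosphorus: 5, matching 3s2 3p3
print(valence_electrons(18, "He"))    # helium: 2
```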
An alkaline earth metal of group 2 (e.g., magnesium) is somewhat less reactive, because each atom must lose two valence electrons to form a positive ion with a closed shell (e.g., Mg2+). Within each group (each periodic table column) of metals, reactivity increases with each lower row of the table (from a light element to a heavier element), because a heavier element has more electron shells than a lighter element; a heavier element's valence electrons exist at higher principal quantum numbers (they are farther away from the nucleus of the atom, and are thus at higher potential energies, which means they are less tightly bound). A nonmetal atom tends to attract additional valence electrons to attain a full valence shell; this can be achieved in one of two ways: An atom can either share electrons with a neighboring atom (a covalent bond), or it can remove electrons from another atom (an ionic bond). The most reactive kind of nonmetal element is a halogen (e.g., fluorine (F) or chlorine (Cl)). Such an atom has the following electron configuration: s2p5; this requires only one additional valence electron to form a closed shell. To form an ionic bond, a halogen atom can remove an electron from another atom in order to form an anion (e.g., F−, Cl−, etc.). To form a covalent bond, one electron from the halogen and one electron from another atom form a shared pair (e.g., in the molecule H–F, the line represents a shared pair of valence electrons, one from H and one from F). Within each group of nonmetals, reactivity decreases with each lower row of the table (from a light element to a heavy element) in the periodic table, because the valence electrons are at progressively higher energies and thus progressively less tightly bound. In fact, oxygen (the lightest element in group 16) is the most reactive nonmetal after fluorine, even though it is not a halogen, because the valence shells of the heavier halogens are at higher principal quantum numbers. In these simple cases where the octet rule is obeyed, the valence of an atom equals the number of electrons gained, lost, or shared in order to form the stable octet. However, there are also many molecules that are exceptions, and for which the valence is less clearly defined. Electrical conductivity Valence electrons are also responsible for the bonding in the pure chemical elements, and whether their electrical conductivity is characteristic of metals, semiconductors, or insulators. Metallic elements generally have high electrical conductivity when in the solid state. In each row of the periodic table, the metals occur to the left of the nonmetals, and thus a metal has fewer possible valence electrons than a nonmetal. However, a valence electron of a metal atom has a small ionization energy, and in the solid-state this valence electron is relatively free to leave one atom in order to associate with another nearby. This situation characterises metallic bonding. Such a "free" electron can be moved under the influence of an electric field, and its motion constitutes an electric current; it is responsible for the electrical conductivity of the metal. Copper, aluminium, silver, and gold are examples of good conductors. A nonmetallic element has low electrical conductivity; it acts as an insulator. Such an element is found toward the right of the periodic table, and it has a valence shell that is at least half full (the exception is boron). 
Its ionization energy is large; an electron cannot leave an atom easily when an electric field is applied, and thus such an element can conduct only very small electric currents. Examples of solid elemental insulators are diamond (an allotrope of carbon) and sulfur. These form covalently bonded structures, either with covalent bonds extending across the whole structure (as in diamond) or with individual covalent molecules weakly attracted to each other by intermolecular forces (as in sulfur). (The noble gases remain as single atoms, but those also experience intermolecular forces of attraction, that become stronger as the group is descended: helium boils at −269 °C, while radon boils at −61.7 °C.) A solid compound containing metals can also be an insulator if the valence electrons of the metal atoms are used to form ionic bonds. For example, although elemental sodium is a metal, solid sodium chloride is an insulator, because the valence electron of sodium is transferred to chlorine to form an ionic bond, and thus that electron cannot be moved easily. A semiconductor has an electrical conductivity that is intermediate between that of a metal and that of a nonmetal; a semiconductor also differs from a metal in that a semiconductor's conductivity increases with temperature. The typical elemental semiconductors are silicon and germanium, each atom of which has four valence electrons. The properties of semiconductors are best explained using band theory, as a consequence of a small energy gap between a valence band (which contains the valence electrons at absolute zero) and a conduction band (to which valence electrons are excited by thermal energy). References External links Francis, Eden. Valence Electrons. Chemical bonding Electron states
Valence electron
[ "Physics", "Chemistry", "Materials_science" ]
2,651
[ "Electron", "Condensed matter physics", "nan", "Chemical bonding", "Electron states" ]
568,824
https://en.wikipedia.org/wiki/Desert%20Research%20Institute
Desert Research Institute (DRI) is a nonprofit research campus of the Nevada System of Higher Education (NSHE), the organization that oversees all publicly supported higher education in the U.S. state of Nevada, and a sister property of the University of Nevada, Reno (UNR). At DRI, approximately 500 research faculty and support staff engage in more than $50 million in environmental research each year. DRI's environmental research programs are divided into three core divisions (Atmospheric Sciences, Earth and Ecosystem Sciences, and Hydrologic Sciences) and two interdisciplinary centers (Center for Arid Lands Environmental Management and the Center for Watersheds and Environmental Sustainability). Established in 1988 and sponsored by AT&T, the institute's Nevada Medal awards "outstanding achievement in science and engineering". Programs Cloud Seeding Program DRI weather modification research produced the Nevada State Cloud Seeding Program in the 1960s. This initiative, funded by the U.S. Bureau of Reclamation and the National Oceanic and Atmospheric Administration, seeks to augment snowfall in mountainous regions of Nevada to increase snowpack and water supply. DRI researchers use ground stations and aircraft to release microscopic silver iodide particles into winter clouds, stimulating the formation of ice crystals that develop into snow. Research indicates that cloud seeding leads to precipitation rate increases of 0.1–1.5 millimeters per hour. Atmospheric and Dispersion Modeling Program For over a decade the Atmospheric and Dispersion Modeling Program team has been performing work focused on observations and modeling of atmospheric dispersion processes over complex terrain and coastal areas. In particular, the team is applying, developing, and evaluating mesoscale meteorological models as well as regulatory and advanced atmospheric dispersion models such as ISC3ST, AERMOD, WYNDVALLEY, ASPEN, and CALPUFF. They have developed a Lagrangian Random Particle Dispersion Model that has been applied to complex coastal and inland environments. Several recent projects led to the development of a real-time mesoscale forecasting system using the MM5 model coupled with a Lagrangian random particle dispersion model, and to the implementation of data assimilation schemes. History A two-page bill signed into law by the Nevada Governor Grant Sawyer on March 23, 1959, authorized establishment of the Desert Research Institute at the University of Nevada, Reno. UNR hired Dr. Wendell Mordy as the Founding Director (1960–1969) of the University's Desert Research Institute, which initially was an office at the top of the historic Morrill Hall building on UNR's campus. Early on, Mordy also initiated the development of UNR's Fleishmann Atmospherium Planetarium. Microplastics were found for the first time in Lake Tahoe in 2019 by the Desert Research Institute. The institute plans to study the pollution to determine if it is from local sources or if particles from discarded plastic products have been transported long distances through the atmosphere by wind, rain and falling snow. Campuses Main research campuses Dandini Research Park – Reno, Nevada. Southern Nevada Science Park – Paradise, Nevada. Subsidiary campuses Boulder City Research Facility – Boulder City, Nevada. Storm Peak Laboratory – Steamboat Springs, Colorado. 
Stead Research Facility - Reno, Nevada See also Atmospheric dispersion modeling List of atmospheric dispersion models Notes References 1959 establishments in Nevada 1988 establishments in Nevada Atmospheric dispersion modeling Buildings and structures in Paradise, Nevada Education in Reno, Nevada Educational institutions established in 1959 Educational institutions established in 1988 Meteorological research institutes Nevada System of Higher Education Nuclear research institutes Universities and colleges in Clark County, Nevada Public universities and colleges in Nevada Environmental research institutes
Desert Research Institute
[ "Chemistry", "Engineering", "Environmental_science" ]
734
[ "Nuclear research institutes", "Nuclear organizations", "Environmental research institutes", "Atmospheric dispersion modeling", "Environmental engineering", "Environmental modelling", "Environmental research" ]
569,263
https://en.wikipedia.org/wiki/Acute-phase%20protein
Acute-phase proteins (APPs) are a class of proteins whose concentrations in blood plasma either increase (positive acute-phase proteins) or decrease (negative acute-phase proteins) in response to inflammation. This response is called the acute-phase reaction (also called acute-phase response). The acute-phase reaction characteristically involves fever and an increase in peripheral leukocytes, in particular circulating neutrophils and their precursors. The terms acute-phase protein and acute-phase reactant (APR) are often used synonymously, although some APRs are (strictly speaking) polypeptides rather than proteins. In response to injury, local inflammatory cells (neutrophil granulocytes and macrophages) secrete a number of cytokines into the bloodstream, most notable of which are the interleukins IL-1 and IL-6, and TNF-α. The liver responds by producing many acute-phase reactants. At the same time, the production of a number of other proteins is reduced; these proteins are, therefore, referred to as "negative" acute-phase reactants. Increased acute-phase proteins from the liver may also contribute to the promotion of sepsis. Regulation of synthesis TNF-α, IL-1β and IFN-γ are important for the expression of inflammatory mediators such as prostaglandins and leukotrienes, and they also cause the production of platelet-activating factor and IL-6. After stimulation with proinflammatory cytokines, Kupffer cells produce IL-6 in the liver and present it to the hepatocytes. IL-6 is the major mediator for the hepatocytic secretion of APPs. Synthesis of APP can also be regulated indirectly by cortisol. Cortisol can enhance expression of IL-6 receptors in liver cells and induce IL-6-mediated production of APPs. Positive Positive acute-phase proteins serve (as part of the innate immune system) different physiological functions within the immune system. Some act to destroy or inhibit growth of microbes, e.g., C-reactive protein, mannose-binding protein, complement factors, ferritin, ceruloplasmin, serum amyloid A and haptoglobin. Others give negative feedback on the inflammatory response, e.g. serpins. Alpha 2-macroglobulin and coagulation factors affect coagulation, mainly stimulating it. This pro-coagulant effect may limit infection by trapping pathogens in local blood clots. Also, some products of the coagulation system can contribute to the innate immune system by their ability to increase vascular permeability and act as chemotactic agents for phagocytic cells. Negative "Negative" acute-phase proteins decrease in inflammation. Examples include albumin, transferrin, transthyretin, retinol-binding protein, antithrombin, and transcortin. The decrease of such proteins may be used as markers of inflammation. The physiological role of decreased synthesis of such proteins is generally to save amino acids for producing "positive" acute-phase proteins more efficiently. Theoretically, transferrin could additionally be decreased by an upregulation of transferrin receptors, but the latter does not appear to change with inflammation. While the production of C3 (a complement factor) increases in the liver, its plasma concentration often lowers because of an increased turnover; therefore, it is often seen as a negative acute-phase protein. Clinical significance Measurement of acute-phase proteins, especially C-reactive protein, is a useful marker of inflammation in both medical and veterinary clinical pathology. It correlates with the erythrocyte sedimentation rate (ESR), though not always directly. 
This is due to the ESR being largely dependent on the elevation of fibrinogen, an acute phase reactant with a half-life of approximately one week. This protein will therefore remain elevated for longer despite the removal of the inflammatory stimuli. In contrast, C-reactive protein (with a half-life of 6–8 hours) rises rapidly and can quickly return to within the normal range if treatment is employed. For example, in active systemic lupus erythematosus, one may find a raised ESR but normal C-reactive protein. Acute-phase protein levels may also indicate liver failure. References External links http://eclinpath.com/chemistry/proteins/acute-phase-proteins/ Immune system
Acute-phase protein
[ "Biology" ]
924
[ "Immune system", "Organ systems" ]
569,403
https://en.wikipedia.org/wiki/Pic%20du%20Midi%20de%20Bigorre
The Pic du Midi de Bigorre or simply the Pic du Midi (elevation 2,877 m) is a mountain in the French Pyrenees. It is the site of the Pic du Midi Observatory. Pic du Midi Observatory The Pic du Midi Observatory is an astronomical observatory located at 2,877 meters on top of the Pic du Midi de Bigorre in the French Pyrenees. It is part of the Observatoire Midi-Pyrénées (OMP) which has additional research stations in the southwestern French towns of Tarbes, Lannemezan, and Auch, as well as many partnerships in South America, Africa, and Asia, due to the guardianship it receives from the French Research Institute for Development (IRD). Construction of the observatory began in 1878 under the auspices of the Société Ramond, but by 1882 the society decided that the spiralling costs were beyond its relatively modest means, and yielded the observatory to the French state, which took it into its possession by a law of 7 August 1882. The 8-metre dome was completed in 1908, under the ambitious direction of Benjamin Baillaud. It housed a powerful mechanical equatorial reflector which was used in 1909 to formally discredit the Martian canal theory. In 1946 Mr. Gentilli funded a dome and a 0.60-meter telescope, and in 1958, a spectrograph was installed. A 1.06-meter (42-inch) telescope was installed in 1963, funded by NASA, and was used to take detailed photographs of the surface of the Moon in preparation for the Apollo missions. In 1965 the astronomers Pierre and Janine Connes were able to formulate a detailed analysis of the composition of the atmospheres of Mars and Venus, based on the infrared spectra gathered from these planets. The results showed atmospheres in chemical equilibrium. This served as a basis for James Lovelock, a scientist working for the Jet Propulsion Laboratory in California, to predict that those planets had no life, a prediction that gained scientific acceptance years later. The 2-metre Bernard Lyot Telescope was placed at the observatory in 1980 on top of a 28-meter column built off to the side to avoid wind turbulence affecting the seeing of the other telescopes. It is the largest telescope in France. The observatory also has a coronagraph, which is used to study the solar corona. A 0.60-meter telescope (Gentilli's T60 telescope) is also located at the top of Pic du Midi. Since 1982 this T60 has been dedicated to amateur astronomy and managed by a group of amateurs, called association T60. The observatory consists of: The 0.55-meter telescope (Robley Dome); The 0.60-meter telescope (T60 Dome, welcoming amateur astronomers via the Association T60); The 1.06-meter telescope (Gentilli Dome) dedicated to observations of the solar system; The 2-meter telescope or Bernard Lyot Telescope (used with a new generation stellar spectropolarimeter); The coronagraph HACO-CLIMSO (studies of the solar corona); The Jean Rösch refractor (studies of the solar surface); The Charvin dome, which sheltered a photoelectric coronometer (which studied the Sun); The Baillaud dome, reassigned to the museum in 2000 and which houses a 1:1 scale model coronagraph. The observatory is located very close to the Greenwich meridian. Saturn's moon Helene (Saturn XII or Dione B) was discovered by French astronomers Pierre Laques and Jean Lecacheux in 1980 from ground-based observations at Pic du Midi, and named Helene in 1988. It is also a trojan moon of Dione. The main-belt asteroid 20488 Pic-du-Midi, discovered at Pises Observatory in 1999, was named for the observatory and the mountain it is located on. 
List of discovered minor planets The Minor Planet Center credits the observatory directly with the discovery of several minor planets (as of 2017, no discoveries had been assigned to individual astronomers). International Dark Sky Reserve Officially initiated in 2009, during the International Year of Astronomy, the Pic du Midi International Dark Sky Reserve (IDSR) was designated in 2013 by the International Dark-Sky Association. It is the sixth such reserve in the world, the first in Europe and, to this day, the only one in France. The IDSR aims to limit the exponential spread of light pollution in order to preserve the quality of the night sky. Co-managed by the Syndicat mixte for the tourist promotion of the Pic du Midi, the Pyrénées National Park and the Departmental Energy Union 65, its priority actions are public education on the impacts and consequences of light pollution and the establishment of responsible lighting across the Hautes-Pyrénées. It covers 3,000 km2, or 65% of the Hautes-Pyrénées. The IDSR includes 251 communes spread around the Pic du Midi de Bigorre and is divided into two zones: A core zone, devoid of any permanent lighting, with an exceptional quality of night sky; A buffer zone, in which local stakeholders recognize the importance of the nocturnal environment and undertake to protect it. The IDSR initiated the "Ciel Etoilé" (Starry Sky) programme, which is converting the 40,000 lighting points of its territory; the "Gardiens des Etoiles" (Guardians of the Stars) programme, which carries out metrological monitoring of how light pollution evolves; and the "Adap'Ter" project, which will identify "trames sombres" (dark corridors for the movement of nocturnal wildlife). Climate Pic du Midi de Bigorre has a Mediterranean alpine climate with a polar temperature regime due to its high elevation. Because the Gulf Stream moderates the surrounding lowlands, temperature swings are in general quite low. As a result, maximum temperatures remain modest even during lowland heat waves, and extreme minima are also rare. The UV index is higher than in the surrounding lowlands due to the elevation. Snow cover is permanent during winter months, but melts for a few months each year. Seasonal lag is extreme during winter and spring, with February clearly being the coldest month and May still having mean temperatures below freezing. Among lowland climates, the station most closely resembles Nuuk in Greenland in its temperature regime. See also List of astronomical observatories References External links Observatoire Midi-Pyrénées Profile of climb from Col du Tourmalet on www.climbbybike.com A night on the "Vaisseaux d'Etoiles" (Starship) du Pic du Midi - Photo gallery Histoire de l'observatoire du Pic du Midi (Observatory history) Video about the Pic du Midi, by Roger Servajean, on Paris Observatory digital library Astronomical observatories in France Pic du Midi Observatory Mountains of Hautes-Pyrénées Mountains of the Pyrenees Pic du Midi Observatory International Dark Sky Reserves
Pic du Midi de Bigorre
[ "Astronomy" ]
1,411
[ "International Dark Sky Reserves", "Dark-sky preserves" ]
569,480
https://en.wikipedia.org/wiki/Receptor%20%28biochemistry%29
In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter, inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway. Receptor proteins can be classified by their location. Cell surface receptors, also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits the receptor's associated biochemical pathway, which may also be highly specialised. Receptor proteins can be also classified by the property of the ligands. Such classifications include chemoreceptors, mechanoreceptors, gravitropic receptors, photoreceptors, magnetoreceptors and gasoreceptors. Structure The structures of receptors are very diverse and include the following major categories, among others: Type 1: Ligand-gated ion channels (ionotropic receptors) – These receptors are typically the targets of fast neurotransmitters such as acetylcholine (nicotinic) and GABA; activation of these receptors results in changes in ion movement across a membrane. They have a heteromeric structure in that each subunit consists of the extracellular ligand-binding domain and a transmembrane domain which includes four transmembrane alpha helices. The ligand-binding cavities are located at the interface between the subunits. Type 2: G protein-coupled receptors (metabotropic receptors) – This is the largest family of receptors and includes the receptors for several hormones and slow transmitters e.g. dopamine, metabotropic glutamate. They are composed of seven transmembrane alpha helices. The loops connecting the alpha helices form extracellular and intracellular domains. The binding-site for larger peptide ligands is usually located in the extracellular domain whereas the binding site for smaller non-peptide ligands is often located between the seven alpha helices and one extracellular loop. 
The aforementioned receptors are coupled to different intracellular effector systems via G proteins. G proteins are heterotrimers made up of 3 subunits: α (alpha), β (beta), and γ (gamma). In the inactive state, the three subunits associate together and the α-subunit binds GDP. G protein activation causes a conformational change, which leads to the exchange of GDP for GTP. GTP-binding to the α-subunit causes dissociation of the β- and γ-subunits. Furthermore, the α-subunits fall into four main classes based on their primary sequence: Gs, Gi, Gq and G12. Type 3: Kinase-linked and related receptors (see "Receptor tyrosine kinase" and "Enzyme-linked receptor") – They are composed of an extracellular domain containing the ligand binding site and an intracellular domain, often with enzymatic function, linked by a single transmembrane alpha helix. The insulin receptor is an example. Type 4: Nuclear receptors – While they are called nuclear receptors, they are actually located in the cytoplasm and migrate to the nucleus after binding with their ligands. They are composed of a C-terminal ligand-binding region, a core DNA-binding domain (DBD) and an N-terminal domain that contains the AF1 (activation function 1) region. The core region has two zinc fingers that are responsible for recognizing the DNA sequences specific to this receptor. The N terminus interacts with other cellular transcription factors in a ligand-independent manner; and, depending on these interactions, it can modify the binding/activity of the receptor. Steroid and thyroid-hormone receptors are examples of such receptors. Membrane receptors may be isolated from cell membranes by complex extraction procedures using solvents, detergents, and/or affinity purification. The structures and actions of receptors may be studied by using biophysical methods such as X-ray crystallography, NMR, circular dichroism, and dual polarisation interferometry. Computer simulations of the dynamic behavior of receptors have been used to gain understanding of their mechanisms of action. Binding and activation Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action: for a ligand L and receptor R, L + R ⇌ LR, with dissociation constant Kd = [L][R]/[LR]. The brackets around chemical species denote their concentrations. One measure of how well a molecule fits a receptor is its binding affinity, which is inversely related to the dissociation constant Kd. A good fit corresponds with high affinity and low Kd. The final biological response (e.g. second messenger cascade, muscle contraction) is only achieved after a significant number of receptors are activated. Affinity is a measure of the tendency of a ligand to bind to its receptor. Efficacy is the measure of the ability of the bound ligand to activate its receptor. Agonists versus antagonists Not every ligand that binds to a receptor also activates that receptor. The following classes of ligands exist: (Full) agonists are able to activate the receptor and result in a strong biological response. The natural endogenous ligand with the greatest efficacy for a given receptor is by definition a full agonist (100% efficacy). Partial agonists do not activate receptors with maximal efficacy, even with maximal binding, causing partial responses compared to those of full agonists (efficacy between 0 and 100%). Antagonists bind to receptors but do not activate them. 
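As an illustrative aside, the mass-action equilibrium above leads to the familiar fractional-occupancy expression [L]/([L] + Kd), often called the Hill–Langmuir equation. The short sketch below uses made-up Kd and concentration values purely to show how occupancy saturates as ligand concentration rises:

```python
# Fractional receptor occupancy from the mass-action equilibrium L + R <=> LR.
# The Kd and ligand concentrations below are hypothetical, for illustration only.
def occupancy(ligand_conc, kd):
    """Hill-Langmuir equation: fraction of receptors bound at equilibrium."""
    return ligand_conc / (ligand_conc + kd)

kd = 10.0  # nM (hypothetical dissociation constant)
for conc in [1.0, 10.0, 100.0, 1000.0]:  # nM
    print(f"[L] = {conc:7.1f} nM -> occupancy = {occupancy(conc, kd):.2f}")
# At [L] = Kd exactly half the receptors are occupied; occupancy approaches 1 as [L] grows.
```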
Binding of an antagonist in this way results in a receptor blockade, inhibiting the binding of agonists and inverse agonists. Receptor antagonists can be competitive (or reversible), and compete with the agonist for the receptor, or they can be irreversible antagonists that form covalent bonds (or extremely high affinity non-covalent bonds) with the receptor and completely block it. The proton pump inhibitor omeprazole is an example of an irreversible antagonist. The effects of irreversible antagonism can only be reversed by synthesis of new receptors. Inverse agonists reduce the activity of receptors by inhibiting their constitutive activity (negative efficacy). Allosteric modulators: They do not bind to the agonist-binding site of the receptor but instead on specific allosteric binding sites, through which they modify the effect of the agonist. For example, benzodiazepines (BZDs) bind to the BZD site on the GABAA receptor and potentiate the effect of endogenous GABA. Note that the idea of receptor agonism and antagonism only refers to the interaction between receptors and ligands and not to their biological effects. Constitutive activity A receptor which is capable of producing a biological response in the absence of a bound ligand is said to display "constitutive activity". The constitutive activity of a receptor may be blocked by an inverse agonist. The anti-obesity drugs rimonabant and taranabant are inverse agonists at the cannabinoid CB1 receptor, and though they produced significant weight loss, both were withdrawn owing to a high incidence of depression and anxiety, which are believed to relate to the inhibition of the constitutive activity of the cannabinoid receptor. The GABAA receptor has constitutive activity and conducts some basal current in the absence of an agonist. This allows beta carboline to act as an inverse agonist and reduce the current below basal levels. Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors). Theories of drug-receptor interaction Occupation Early forms of the receptor theory of pharmacology stated that a drug's effect is directly proportional to the number of receptors that are occupied. Furthermore, a drug effect ceases as a drug-receptor complex dissociates. Ariëns & Stephenson introduced the terms "affinity" and "efficacy" to describe the action of ligands bound to receptors. Affinity: The ability of a drug to combine with a receptor to create a drug-receptor complex. Efficacy: The ability of a drug to initiate a response after the formation of a drug-receptor complex. Rate In contrast to the accepted Occupation Theory, Rate Theory proposes that the activation of receptors is directly proportional to the total number of encounters of a drug with its receptors per unit time. Pharmacological activity is directly proportional to the rates of dissociation and association, not the number of receptors occupied: Agonist: A drug with a fast association and a fast dissociation. Partial agonist: A drug with an intermediate association and an intermediate dissociation. Antagonist: A drug with a fast association and a slow dissociation. Induced-fit As a drug approaches a receptor, the receptor alters the conformation of its binding site to produce the drug–receptor complex. Spare Receptors In some receptor systems (e.g. 
acetylcholine at the neuromuscular junction in smooth muscle), agonists are able to elicit maximal response at very low levels of receptor occupancy (<1%). Thus, that system has spare receptors or a receptor reserve. This arrangement produces an economy of neurotransmitter production and release. Receptor regulation Cells can increase (upregulate) or decrease (downregulate) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to different molecules. This is a locally acting feedback mechanism. Change in the receptor conformation such that binding of the agonist does not activate the receptor. This is seen with ion channel receptors. Uncoupling of the receptor effector molecules is seen with G protein-coupled receptors. Receptor sequestration (internalization), e.g. in the case of hormone receptors. Examples and ligands The ligands for receptors are as diverse as their receptors. GPCRs (7TMs) are a particularly vast family, with at least 810 members. There are also LGICs for at least a dozen endogenous ligands, and many more receptors possible through different subunit compositions. Some common examples of ligands and receptors include: Ion channels and G protein coupled receptors Some example ionotropic (LGIC) and metabotropic (specifically, GPCRs) receptors are shown in the table below. The chief neurotransmitters are glutamate and GABA; other neurotransmitters are neuromodulatory. This list is by no means exhaustive. Enzyme linked receptors Enzyme linked receptors include Receptor tyrosine kinases (RTKs), serine/threonine-specific protein kinase, as in bone morphogenetic protein and guanylate cyclase, as in atrial natriuretic factor receptor. Of the RTKs, 20 classes have been identified, with 58 different RTKs as members. Some examples are shown below: Intracellular Receptors Receptors may be classed based on their mechanism or on their position in the cell. 4 examples of intracellular LGIC are shown below: Role in health and disease In genetic disorders Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone. In the immune system The main receptors in the immune system are pattern recognition receptors (PRRs), toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors. See also Ki Database Ion channel linked receptors Neuropsychopharmacology Schild regression for ligand receptor inhibition Signal transduction Stem cell marker List of MeSH codes (D12.776) Receptor theory Notes References External links IUPHAR GPCR Database and Ion Channels Compendium Human plasma membrane receptome Cell biology Cell signaling Membrane biology
Receptor (biochemistry)
[ "Chemistry", "Biology" ]
2,857
[ "Cell biology", "Membrane biology", "Signal transduction", "Receptors", "Molecular biology" ]
2,151,656
https://en.wikipedia.org/wiki/Wave%20tank
A wave tank is a laboratory setup for observing the behavior of surface waves. The typical wave tank is a box filled with liquid, usually water, leaving open or air-filled space on top. At one end of the tank, an actuator generates waves; the other end usually has a wave-absorbing surface. A similar device is the ripple tank, which is flat and shallow and used for observing patterns of surface waves from above. Wave basin A wave basin is a wave tank which has a width and length of comparable magnitude, often used for testing ships, offshore structures and three-dimensional models of harbors (and their breakwaters). Wave flume A wave flume (or wave channel) is a special sort of wave tank: the width of the flume is much less than its length. The generated waves are therefore – more or less – two-dimensional in a vertical plane (2DV), meaning that the orbital flow velocity component in the direction perpendicular to the flume side wall is much smaller than the other two components of the three-dimensional velocity vector. This makes a wave flume a well-suited facility to study near-2DV structures, like cross-sections of a breakwater. Also (3D) constructions providing little blockage to the flow may be tested, e.g. measuring wave forces on vertical cylinders with a diameter much less than the flume width. Wave flumes may be used to study the effects of water waves on coastal structures, offshore structures, sediment transport and other transport phenomena. The waves are most often generated with a mechanical wavemaker, although there are also wind–wave flumes with (additional) wave generation by an air flow over the water – with the flume closed above by a roof above the free surface. The wavemaker frequently consists of a translating or rotating rigid wave board. Modern wavemakers are computer controlled, and can generate besides periodic waves also random waves, solitary waves, wave groups or even tsunami-like wave motion. The wavemaker is at one end of the wave flume, and at the other end is the construction being tested, or a wave absorber (a beach or special wave absorbing constructions). Often, the side walls contain glass windows, or are completely made of glass, allowing for a clear visual observation of the experiment, and the easy deployment of optical instruments (e.g. by Laser Doppler velocimetry or particle image velocimetry). Circular wave basin In 2014, the first circular, combined current and wave test basin, FloWaveTT, was commissioned in The University of Edinburgh. This allows for "true" 360° waves to be generated to simulate rough storm conditions as well as scientific controlled waves in the same facility. See also Water tunnel (hydrodynamic) Airy wave theory Ocean waves Ripple tank Shallow water equations Further reading References External links Experimental physics Hydrodynamics Water waves Scale modeling Physical models Articles containing video clips
Wave tank
[ "Physics", "Chemistry" ]
600
[ "Scale modeling", "Physical phenomena", "Water waves", "Hydrodynamics", "Waves", "Experimental physics", "Physical objects", "Physical models", "Matter", "Fluid dynamics" ]
2,151,949
https://en.wikipedia.org/wiki/Mott%20problem
The Mott problem is an iconic challenge to quantum mechanics theory: how can the prediction of spherically symmetric wave function result in linear tracks seen in a cloud chamber. The problem was first formulated in 1927 by Albert Einstein and Max Born and solved in 1929 by Nevill Francis Mott. Mott's solution notably only uses the wave equation, not wavefunction collapse, and it is considered the earliest example of what is now called decoherence theory. Spherical waves, particle tracks The problem later associated with Mott concerns a spherical wave function associated with an alpha ray emitted from the decay of a radioactive atomic nucleus. Intuitively, one might think that such a wave function should randomly ionize atoms throughout the cloud chamber, but this is not the case. The result of such a decay is always observed as linear tracks seen in Wilson's cloud chamber. The origin of the tracks given the original spherical wave predicted by theory is the problem requiring physical explanation. In practice, virtually all high energy physics experiments, such as those conducted at particle colliders, involve wave functions which are inherently spherical. Yet, when the results of a particle collision are detected, they are invariably in the form of linear tracks (see, for example, the illustrations accompanying the article on bubble chambers). It is somewhat strange to think that a spherically symmetric wave function should be observed as a straight track, and yet, this occurs on a daily basis in all particle collider experiments. History The problem of alpha particle track was discussed at the Fifth Solvay conference in 1927. Max Born described the problem as one that Albert Einstein pointed to, asking "how can the corpuscular character of the phenomenon be reconciled here with the representation by waves?". Born answers with Heisenberg's "reduction of the probability packet", now called wavefunction collapse, introduced in May 1927. Born says each droplet in the cloud chamber track corresponds to a reduction of the wave in the immediate vicinity of the droplet. At the suggestion of Wolfgang Pauli he also discusses a solution that includes the alpha emitter and two atoms all in the same state and without wave function collapse, but does not pursue the idea beyond a brief discussion. In his highly influential 1930 book, Werner Heisenberg analyzed the problem qualitatively but in detail. He considers two cases: wavefunction collapse at each interaction or wavefunction collapse only at the final apparatus, concluding they are equivalent. In 1929 Charles Galton Darwin analyzed the problem without using wavefunction collapse. He says the correct approach requires viewing the wavefunction as consisting of the system under study (the alpha particle) and the environment it interacts with (atoms of the cloud chamber). Starting with a simple spherical wave, each collision involves a wavefunction with more coordinates and increasing complexity. His model coincides with the strategy of modern quantum decoherence theory. Mott's analysis Nevill Mott picks up where Darwin left off, citing Darwin's paper explicitly. Mott's goal is to calculate the probability of exciting multiple atoms in the cloud chamber to understand why the excitation with a spherical wave creates a linear track. Mott starts with a spherical wave for the alpha particle and two representative cloud chamber atoms modeled as hydrogen atoms. 
The relative positions of the emitter (black dot in the diagram, taken as the origin in Mott's treatment) and the two atoms (orange dots at a1 and a2) are fixed during the calculation of the track, meaning the velocity of the alpha particle is taken as much larger than the thermal motion of the gas atoms. These relative coordinates are parameters in the solution so the intensity of the excitations for various positions can be compared. The hydrogen atoms stand in for whatever might compose the cloud chamber gas. Given the fixed positions of the atoms, Mott calculates the excitation of the electrons of those atoms. By assuming that the emitter and the hydrogen atoms are not close together, Mott represents the time-independent part of the three-body state of the system, Ψ(R, r1, r2), as a sum of products of hydrogen atom eigenfunctions ψ: Ψ(R, r1, r2) = Σ_{n,m} f_{nm}(R) ψ_n(r1) ψ_m(r2). Here R is the position of the alpha particle, r1 and r2 are the positions of the hydrogen atoms' electrons, and the sum runs over the excited states n and m of the atoms I and II. The expansion factors f_{nm}(R) have the physical interpretation of conditional probability for the alpha particle near R, given that atom I is excited to state n and atom II is excited to state m. To solve for the expansion factors, Mott used the Born approximation, a form of perturbation theory for scattering that works well when the incident wave is not significantly altered by the scattering. Consequently, Mott is assuming that the alpha particle barely notices the atoms it excites as it races through the cloud chamber. Mott analyzes the spatial properties of the factor f_{n0}(R), which describes the scattered alpha-particle wave when the first atom is excited and the second is in its ground state. He shows that it is strongly peaked along the line from the emitter to the first atom (the line from the origin to a1 in the diagram). Mott then shows that the probability that both atoms become excited depends on the product of the probability that one atom is excited and the spatial extent of the electron potential of the other atom. Both atoms are excited only for collinear configurations. Mott demonstrated that by considering the interaction in configuration space, where all of the atoms of the cloud chamber play a role, it is overwhelmingly probable that all of the condensed droplets in the cloud chamber will lie close to the same straight line. In his work on quantum measurement, Eugene Wigner cites Mott's insight on configuration space as a critical aspect of quantum mechanics: the configuration space approach allows spatial correlations, like the line of excited atoms, to be built into the structure of quantum mechanics. What is uncertain is which straight line the wave packet will reduce to; the probability distribution of straight tracks is spherically symmetric. Modern applications Erich Joos and H. Dieter Zeh adopt Mott's model in the first concrete model of quantum decoherence theory. Mott's analysis, while it predates modern decoherence theory, fits squarely within its approach. Bryce DeWitt points to the dramatic mass difference between the alpha particle and the electrons in Mott's analysis as characteristic of decoherence of the state of the more massive system, the alpha particle. In modern times, the Mott problem is occasionally considered theoretically in the context of astrophysics and cosmology, where the evolution of the wave function from the Big Bang or other astrophysical phenomena is considered. See also References Quantum measurement
Mott problem
[ "Physics" ]
1,334
[ "Quantum measurement", "Quantum mechanics" ]
2,152,181
https://en.wikipedia.org/wiki/List%20of%20chemical%20elements
118 chemical elements have been identified and named officially by IUPAC. A chemical element, often simply called an element, is a type of atom which has a specific number of protons in its atomic nucleus (i.e., a specific atomic number, or Z). The definitive visualisation of all 118 elements is the periodic table of the elements, whose history along the principles of the periodic law was one of the founding developments of modern chemistry. It is a tabular arrangement of the elements by their chemical properties that usually uses abbreviated chemical symbols in place of full element names, but the linear list format presented here is also useful. Like the periodic table, the list below organizes the elements by the number of protons in their atoms; it can also be organized by other properties, such as atomic weight, density, and electronegativity. For more detailed information about the origins of element names, see List of chemical element name etymologies. List See also List of people whose names are used in chemical element names List of places used in the names of chemical elements List of chemical element name etymologies Roles of chemical elements Extended periodic table Theories about undiscovered elements References External links Atoms made thinkable, an interactive visualisation of the elements allowing physical and chemical properties of the elements to be compared
List of chemical elements
[ "Chemistry" ]
266
[ "Lists of chemical elements" ]
2,154,371
https://en.wikipedia.org/wiki/Extreme%20ultraviolet%20lithography
Extreme ultraviolet lithography (EUVL, also known simply as EUV) is a technology used in the semiconductor industry for manufacturing integrated circuits (ICs). It is a type of photolithography that uses 13.5 nm extreme ultraviolet (EUV) light from a laser-pulsed tin (Sn) plasma to create intricate patterns on semiconductor substrates. ASML Holding is the only company that produces and sells EUV systems for chip production, targeting 5 nanometer (nm) and 3 nm process nodes. The EUV wavelengths that are used in EUVL are near 13.5 nanometers (nm), using a laser-pulsed tin (Sn) droplet plasma to produce a pattern by using a reflective photomask to expose a substrate covered by photoresist. Tin ions in the ionic states from Sn IX to Sn XIV give photon emission spectral peaks around 13.5 nm from 4p⁶4dⁿ – 4p⁵4dⁿ⁺¹ + 4dⁿ⁻¹4f ionic state transitions. History and economic impact In the 1960s, visible light was used for the production of integrated circuits, with wavelengths as small as 435 nm (mercury "g line"). Later, ultraviolet (UV) light was used, at first with a wavelength of 365 nm (mercury "i line"), then with excimer wavelengths, first of 248 nm (krypton fluoride laser), then 193 nm (argon fluoride laser), which was called deep UV. The next step, going even smaller, was called extreme UV, or EUV. The EUV technology was considered impossible by many. EUV light is absorbed by glass and air, so instead of using lenses to focus the beams of light as done previously, mirrors in vacuum would be needed. A reliable production of EUV was also problematic. Canon and Nikon, then the leading producers of steppers, stopped development, and some predicted the end of Moore's law. In 1991, scientists at Bell Labs published a paper demonstrating the possibility of using a wavelength of 13.8 nm for the so-called soft X-ray projection lithography. To address the challenge of EUV lithography, researchers at Lawrence Livermore National Laboratory, Lawrence Berkeley National Laboratory, and Sandia National Laboratories were funded in the 1990s to perform basic research into the technical obstacles. The results of this successful effort were disseminated via a public/private partnership Cooperative R&D Agreement (CRADA), with the invention and rights wholly owned by the US government, but licensed and distributed under approval by DOE and Congress. The CRADA consisted of a consortium of private companies and the Labs, manifested as an entity called the Extreme Ultraviolet Limited Liability Company (EUV LLC). Intel, Canon, and Nikon (leaders in the field at the time), as well as the Dutch company ASML and Silicon Valley Group (SVG), all sought licensing. Congress denied the Japanese companies the necessary permission, as they were perceived as strong technical competitors at the time and should not benefit from taxpayer-funded research at the expense of American companies. In 2001 SVG was acquired by ASML, leaving ASML as the sole beneficiary of the critical technology. By 2018, ASML succeeded in deploying the intellectual property from the EUV-LLC after several decades of developmental research, incorporating the European-funded EUCLIDES program (Extreme UV Concept Lithography Development System) and working with its long-standing partner, the German optics manufacturer ZEISS, and the synchrotron light source supplier Oxford Instruments. This led MIT Technology Review to name it "the machine that saved Moore's law". The first prototype in 2006 produced one wafer in 23 hours. As of 2022, a scanner produces up to 200 wafers per hour.
The scanner uses Zeiss optics, which that company calls "the most precise mirrors in the world", produced by locating imperfections and then knocking off individual molecules with techniques such as ion beam figuring. This made the once small company ASML the world leader in the production of scanners and monopolist in this cutting-edge technology and resulted in a record turnover of 27.4 billion euros in 2021, dwarfing their competitors Canon and Nikon, who were denied IP access. Because it is such a key technology for development in many fields, the United States licenser pressured Dutch authorities to not sell these machines to China. ASML has followed the guidelines of Dutch export controls and until further notice will have no authority to ship the machines to China. Along with multiple patterning, EUV has paved the way for higher transistor densities, allowing the production of higher-performance processors. Smaller transistors also require less power to operate, resulting in more energy-efficient electronics. Market growth projection According to a report by Pragma Market Research, the global extreme ultraviolet (EUV) lithography market is projected to grow from US$8,957.8 million in 2024 to US$17,350 million by 2030, at a compound annual growth rate (CAGR) of 11.7%. This significant growth reflects the rising demand for miniaturized electronics in various sectors, including smartphones, artificial intelligence, and high-performance computing. Fab tool output Requirements for EUV steppers, given the number of layers in the design that require EUV, the number of machines, and the desired throughput of the fab, assuming 24 hours per day operation. Masks EUV photomasks work by reflecting light, which is achieved by using multiple alternating layers of molybdenum and silicon. This is in contrast to conventional photomasks which work by blocking light using a single chromium layer on a quartz substrate. An EUV mask consists of 40–50 alternating silicon and molybdenum layers; this is a multilayer which acts to reflect the extreme ultraviolet light through Bragg diffraction; the reflectance is a strong function of incident angle and wavelength, with longer wavelengths reflecting more near normal incidence and shorter wavelengths reflecting more away from normal incidence. The multilayer may be protected by a thin ruthenium layer, called a capping layer. The pattern is defined in a tantalum-based absorbing layer over the capping layer. Blank photomasks are mainly made by two companies: AGC Inc. and Hoya Corporation. Ion-beam deposition equipment mainly made by Veeco is often used to deposit the multilayer. A blank photomask is covered with photoresist, which is then baked (solidified) in an oven, and later the pattern is defined on the photoresist using maskless lithography with an electron beam. This step is called exposure. The exposed photoresist is developed (removed), and the unprotected areas are etched. The remaining photoresist is then removed. Masks are then inspected and later repaired using an electron beam. Etching must be done only in the absorbing layer and thus there is a need to distinguish between the capping and the absorbing layer, which is known as etch selectivity and is unlike etching in conventional photomasks, which only have one layer critical to their function. Tool An EUV tool (EUV photolithography machine) has a laser-driven tin (Sn) plasma light source, reflective optics comprising multilayer mirrors, contained within a hydrogen gas ambient. 
The hydrogen is used to keep the EUV collector mirror, as the first mirror collecting EUV emitted over a large range in angle (~2π sr) from the Sn plasma, in the source free of Sn deposition. Specifically, the hydrogen buffer gas in the EUV source chamber or vessel decelerates or possibly pushes back Sn ions and Sn debris traveling toward the EUV collector (collector protection) and enables a chemical reaction, Sn(s) + 4H(g) → SnH4(g), to remove Sn deposition on the collector in the form of SnH4 gas (collector reflectivity restoration). EUVL is a significant departure from the deep-ultraviolet lithography standard. All matter absorbs EUV radiation. Hence, EUV lithography requires vacuum. All optical elements, including the photomask, must use defect-free molybdenum/silicon (Mo/Si) multilayers (consisting of 50 Mo/Si bilayers, whose theoretical reflectivity limit at 13.5 nm is ~75%) that act to reflect light by means of interlayer wave interference; each of these mirrors absorbs around 30% of the incident light, so mirror temperature control is important. EUVL systems, as of 2002–2009, contain at least two condenser multilayer mirrors, six projection multilayer mirrors and a multilayer object (mask). Since the mirrors absorb 96% of the EUV light, the ideal EUV source needs to be much brighter than its predecessors. EUV source development has focused on plasmas generated by laser or discharge pulses. The mirror responsible for collecting the light is directly exposed to the plasma and is vulnerable to damage from high-energy ions and other debris such as tin droplets, which require the costly collector mirror to be replaced every year. Resource requirements The required utility resources are significantly larger for EUV compared to 193 nm immersion, even with two exposures using the latter. At the 2009 EUV Symposium, Hynix reported that the wall plug efficiency was ~0.02% for EUV, i.e., to get 200 watts at intermediate focus for 100 wafers per hour, one would require 1 megawatt of input power, compared to 165 kilowatts for an ArF immersion scanner, and that even at the same throughput, the footprint of the EUV scanner was ~3× the footprint of an ArF immersion scanner, resulting in productivity loss. Additionally, to confine ion debris, a superconducting magnet may be required. A typical EUV tool weighs nearly 200 tons and costs around 180 million USD. EUV tools consume at least 10× more energy than immersion tools. Summary of key features The following table summarizes key differences between EUV systems in development and ArF immersion systems which are widely used in production today: The different degrees of resolution among the 0.33 NA tools are due to the different illumination options. Despite the potential of the optics to reach sub-20 nm resolution, secondary electrons in resist practically limit the resolution to around 20 nm (more on this below). Light source power, throughput, and uptime Neutral atoms or condensed matter cannot emit EUV radiation. Ionization must precede EUV emission in matter. The thermal production of multicharged positive ions is only possible in a hot dense plasma, which itself strongly absorbs EUV. As of 2025, the established EUV light source is a laser-pulsed tin plasma. The ions absorb the EUV light they emit and are easily neutralized by electrons in the plasma to lower charge states, which produce light mainly at other, unusable wavelengths, resulting in a much reduced efficiency of light generation for lithography at higher plasma power density.
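The power budget behind these figures can be laid out with simple arithmetic. The following sketch is only a back-of-the-envelope illustration under stated assumptions: ~70% reflectivity per mirror and 11 reflections (as described in the text), the ~0.02% wall-plug efficiency reported above, a 250 W source, and a nominal resist dose of 40 mJ/cm2; stage stepping, wafer exchange, and other overheads are ignored, so the throughput figure is only an upper bound.

```python
import math

# Back-of-the-envelope EUV power budget (illustrative assumptions only).
mirror_reflectivity = 0.70          # ~30% absorption per multilayer mirror (from the text)
num_reflections = 11                # illuminator + projection optics + mask (from the text)
transmission = mirror_reflectivity ** num_reflections
print(f"optical chain transmission ~ {transmission:.1%}")          # ~2%

wall_plug_efficiency = 0.0002       # ~0.02% reported wall-plug efficiency
power_at_intermediate_focus_w = 200.0
print(f"input power ~ {power_at_intermediate_focus_w / wall_plug_efficiency / 1e6:.1f} MW")

# Crude throughput upper bound, ignoring all overhead.
source_power_w = 250.0              # target optical power
power_at_wafer_w = source_power_w * transmission
dose_j_per_cm2 = 0.040              # 40 mJ/cm2 nominal resist dose
wafer_area_cm2 = math.pi * 15.0 ** 2    # 300 mm wafer
exposure_time_s = dose_j_per_cm2 * wafer_area_cm2 / power_at_wafer_w
print(f"idealized exposure ~ {exposure_time_s:.1f} s/wafer "
      f"(~{3600 / exposure_time_s:.0f} wafers/hour upper bound)")
```

Real throughput falls far below this idealized bound once stepping overhead and source duty cycle are included, which is consistent with the roughly 200 wafers per hour quoted earlier.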
The throughput is tied to the source power, divided by the dose. A higher dose requires a slower stage motion (lower throughput) if pulse power cannot be increased. EUV collector reflectivity degrades ~0.1–0.3% per billion 50 kHz pulses (~10% in ~2 weeks), leading to loss of uptime and throughput, while even for the first few billion pulses (within one day), there is still 20% (±10%) fluctuation. This could be due to the accumulating Sn residue mentioned above which is not completely cleaned off. On the other hand, conventional immersion lithography tools for double-patterning provide consistent output for up to a year. Recently, the NXE:3400B illuminator features a smaller pupil fill ratio (PFR), down to 20%, without transmission loss. PFR is maximized and greater than 0.2 around a metal pitch of 45 nm. Due to the use of EUV mirrors which also absorb EUV light, only a small fraction of the source light is finally available at the wafer. There are 4 mirrors used for the illumination optics and 6 mirrors for the projection optics. The EUV mask or reticle is itself an additional mirror. With 11 reflections, only ~2% of the EUV source light is available at the wafer. The throughput is determined by the EUV resist dose, which in turn depends on the required resolution. A dose of 40 mJ/cm2 is expected to be maintained for adequate throughput. Tool uptime The EUV light source limits tool uptime as well as throughput. In a two-week period, for example, over seven hours of downtime may be scheduled, while total actual downtime including unscheduled issues could easily exceed a day. A dose error over 2% warrants tool downtime. The wafer exposure throughput steadily expanded up to around 1000 wafers per day (per system) over the 2019–2022 period, indicating substantial idle time, while at the same time running >120 wafers per day on a number of multipatterned EUV layers, for an EUV wafer on average. Comparison to other lithography light sources EUV (10–121 nm) is the band longer than X-rays (0.1–10 nm) and shorter than the hydrogen Lyman-alpha line. While state-of-the-art 193 nm ArF excimer lasers offer intensities of 200 W/cm2, lasers for producing EUV-generating plasmas need to be much more intense, on the order of 10¹¹ W/cm2. A state-of-the-art ArF immersion lithography 120 W light source requires no more than 40 kW of electrical power, while EUV sources are targeted to exceed 40 kW. The optical power target for EUV lithography is at least 250 W, while for other conventional lithography sources, it is much less. For example, immersion lithography light sources target 90 W, dry ArF sources 45 W, and KrF sources 40 W. High-NA EUV sources are expected to require at least 500 W. EUV-specific optical issues Reflective optics A fundamental aspect of EUVL tools, resulting from the use of reflective optics, is the off-axis illumination (at an angle of 6°, in different directions at different positions within the illumination slit) on a multilayer mask (reticle). This leads to shadowing effects resulting in asymmetry in the diffraction pattern that degrade pattern fidelity in various ways as described below. For example, one side (behind the shadow) would appear brighter than the other (within the shadow). The behavior of light rays within the plane of reflection (affecting horizontal lines) is different from the behavior of light rays out of the plane of reflection (affecting vertical lines).
Most conspicuously, identically sized horizontal and vertical lines on the EUV mask are printed at different sizes on the wafer. The combination of the off-axis asymmetry and the mask shadowing effect leads to a fundamental inability of two identical features even in close proximity to be in focus simultaneously. One of EUVL's key issues is the asymmetry between the top and bottom line of a pair of horizontal lines (the so-called "two-bar"). Some ways to partly compensate are the use of assist features as well as asymmetric illumination. An extension of the two-bar case to a grating consisting of many horizontal lines shows similar sensitivity to defocus. It is manifest in the critical dimension (CD) difference between the top and bottom edge lines of the set of 11 horizontal lines. Reflection also leads to partial polarization of the EUV light, which favors imaging of lines perpendicular to the plane of the reflections. Pattern shift from defocus (non-telecentricity) The EUV mask absorber, due to partial transmission, generates a phase difference between the 0th and 1st diffraction orders of a line-space pattern, resulting in image shifts (at a given illumination angle) as well as changes in peak intensity (leading to linewidth changes), which are further enhanced due to defocus. Ultimately, this results in different positions of best focus for different pitches and different illumination angles. Generally, the image shift is balanced out due to illumination source points being paired (each on opposite sides of the optical axis). However, the separate images are superposed, and the resulting image contrast is degraded when the individual source image shifts are large enough. The phase difference ultimately also determines the best focus position. The multilayer is also responsible for image shifting due to phase shifts from diffracted light within the multilayer itself. This is inevitable due to light passing twice through the mask pattern. The use of reflection causes wafer exposure position to be extremely sensitive to the reticle flatness and the reticle clamp. Reticle clamp cleanliness is therefore required to be maintained. Small (milliradian-scale) deviations in mask flatness, i.e., in the local slope, couple with wafer defocus to produce image placement errors. More significantly, mask defocus has been found to result in large overlay errors. In particular, for a 10 nm node metal 1 layer (including 48 nm, 64 nm, 70 nm pitches, isolated, and power lines), the uncorrectable pattern placement error was 1 nm for a 40 nm mask z-position shift. This is a global pattern shift of the layer with respect to previously defined layers. However, features at different locations will also shift differently due to different local deviations from mask flatness, e.g., from defects buried under the multilayer. It can be estimated that the contribution of mask non-flatness to overlay error is roughly 1/40 times the peak-to-valley thickness variation. With the blank peak-to-valley spec of 50 nm, ~1.25 nm image placement error is possible. Blank thickness variations of up to 80 nm also contribute, leading to image shifts of up to 2 nm. The off-axis illumination of the reticle is also the cause of non-telecentricity in wafer defocus, which consumes most of the 1.4 nm overlay budget of the NXE:3400 EUV scanner even for design rules as loose as 100 nm pitch.
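The 1/40 rule of thumb quoted above can be stated directly in code. The snippet below merely restates that scaling for the two flatness figures given in the text; it is an illustration of the stated rule, not an independent model.

```python
def placement_error_from_pv_nm(peak_to_valley_nm: float, scale: float = 1.0 / 40.0) -> float:
    """Rule-of-thumb image placement error from mask non-flatness (~P-V/40, per the text)."""
    return peak_to_valley_nm * scale

print(placement_error_from_pv_nm(50.0))   # 50 nm blank flatness spec  -> ~1.25 nm
print(placement_error_from_pv_nm(80.0))   # 80 nm thickness variation  -> ~2 nm
```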
The worst uncorrectable pattern placement error for a 24 nm line was about 1.1 nm, relative to an adjacent 72 nm power line, per 80 nm wafer focus position shift at a single slit position; when across-slit performance is included, the worst error is over 1.5 nm in the wafer defocus window. In 2017, an actinic microscope mimicking a 0.33 NA EUV lithography system with 0.2/0.9 quasar 45 illumination showed that an 80 nm pitch contact array shifted −0.6 to 1.0 nm while a 56 nm pitch contact array shifted −1.7 to 1.0 nm relative to a horizontal reference line, within a ±50 nm defocus window. Wafer defocus also leads to image placement errors due to deviations from local mask flatness. If the local slope is indicated by an angle α, the image in a 4× projection tool is shifted by an amount proportional to the product of α and the depth of focus (DOF). For a depth of focus of 100 nm, a small local deviation from flatness of 2.5 mrad (0.14°) can lead to a pattern shift of 1 nm. Simulations as well as experiments have shown that pupil imbalances in EUV lithography can result in pitch-dependent pattern placement errors. Since the pupil imbalance changes with EUV collector mirror aging or contamination, such placement errors may not be stable over time. The situation is specifically challenging for logic devices, where multiple pitches have critical requirements at the same time. The issue is ideally addressed by multiple exposures with tailored illuminations. Slit position dependence The direction of illumination is also highly dependent on slit position, essentially rotated azimuthally. Nanya Technology and Synopsys found that horizontal vs. vertical bias changed across the slit with dipole illumination. The rotating plane of incidence (azimuthal range within −25° to 25°) is confirmed in the SHARP actinic review microscope at CXRO, which mimics the optics for EUV projection lithography systems. The reason for this is that a mirror is used to transform straight rectangular fields into arc-shaped fields. In order to preserve a fixed plane of incidence, the reflection from the previous mirror would be from a different angle with the surface for a different slit position; this causes non-uniformity of reflectivity. To preserve uniformity, rotational symmetry with a rotating plane of incidence is used. More generally, so-called "ring-field" systems reduce aberrations by relying on the rotational symmetry of an arc-shaped field derived from an off-axis annulus. This is preferred, as reflective systems must use off-axis paths, which aggravate aberrations. Hence identical die patterns within different halves of the arc-shaped slit would require different OPC. This renders them uninspectable by die-to-die comparison, as they are no longer truly identical dies. For pitches requiring dipole, quadrupole, or hexapole illumination, the rotation also causes mismatch with the same pattern layout at a different slit position, i.e., edge vs. center. Even with annular or circular illumination, the rotational symmetry is destroyed by the angle-dependent multilayer reflectance described above. Although the azimuthal angle range is about ±20° (field data indicated over 18°) on 0.33 NA scanners, at 7 nm design rules (36–40 nm pitch), the tolerance for illumination can be ±15°, or even less. Annular illumination nonuniformity and asymmetry also significantly impact the imaging. Newer systems have azimuthal angle ranges going up to ±30°.
On 0.33 NA systems, 30 nm pitch and lower already suffer sufficient reduction of pupil fill to significantly affect throughput. The trend of larger incident angles for pitch-dependent dipole illumination across the slit does not affect horizontal line shadowing so much, but vertical line shadowing does increase going from center to edge. In addition, higher-NA systems may offer limited relief from shadowing, as they target tighter pitches. The slit position dependence is particularly difficult for the tilted patterns encountered in DRAM. Besides the more complicated effects due to shadowing and pupil rotation, tilted edges are converted to a stair shape, which may be distorted by OPC. In fact, 32 nm pitch DRAM by EUV will lengthen to a cell area of at least 9F², where F is the active area half-pitch (traditionally, it had been 6F²). With a 2-D self-aligned double-patterning active area cut, the cell area is still lower, at 8.9F². Aberrations, originating from deviations of optical surfaces from subatomic (<0.1 nm) specifications as well as thermal deformations and possibly including polarized reflectance effects, are also dependent on slit position, as will be further discussed below with regard to source-mask optimization (SMO). The thermally induced aberrations are expected to exhibit differences among different positions across the slit, corresponding to different field positions, as each position encounters different parts of the deformed mirrors. Ironically, the use of substrate materials with high thermal and mechanical stability makes it more difficult to compensate for wavefront errors. In combination with the range of wavelengths, the rotated plane of incidence aggravates the already severe stochastic impact on EUV imaging. Wavelength bandwidth (chromatic aberration) Unlike deep ultraviolet (DUV) lithography sources, based on excimer lasers, EUV plasma sources produce light across a broad range of wavelengths, roughly spanning a 2% FWHM bandwidth near 13.5 nm (13.36 nm–13.65 nm at 50% power). EUV (10–121 nm) is the band longer than X-rays (0.1–10 nm) and shorter than the hydrogen Lyman-alpha line. Though the EUV spectrum is not completely monochromatic, nor even as spectrally pure as DUV laser sources, the working wavelength has generally been taken to be 13.5 nm. In actuality, the reflected power is distributed mostly in the 13.3–13.7 nm range. The bandwidth of EUV light reflected by a multilayer mirror used for EUV lithography is over ±2% (>270 pm); the phase changes due to wavelength changes at a given illumination angle may be calculated and compared to the aberration budget. Wavelength dependence of reflectance also affects the apodization, or illumination distribution across the pupil (for different angles); different wavelengths effectively 'see' different illuminations as they are reflected differently by the multilayer of the mask. This effective source illumination tilt can lead to large image shifts due to defocus. Conversely, the peak reflected wavelength varies across the pupil due to different incident angles. This is aggravated when the angles span a wide radius, e.g., annular illumination. The peak reflectance wavelength increases for smaller incident angles. Aperiodic multilayers have been proposed to reduce the sensitivity at the cost of lower reflectivity, but are too sensitive to random fluctuations of layer thicknesses, such as from thickness control imprecision or interdiffusion.
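The angle dependence of the multilayer peak reflectance described above can be approximated with the simple Bragg condition for a periodic Mo/Si stack. The sketch below is only an illustration under stated assumptions: refraction and absorption corrections are ignored, and a nominal bilayer period of about 6.9 nm is assumed (a typical literature value, not a figure from this article).

```python
import math

def bragg_peak_wavelength_nm(period_nm: float, angle_from_normal_deg: float, order: int = 1) -> float:
    """Estimate the peak-reflectance wavelength of a periodic multilayer mirror.

    Uses the simple Bragg condition m*lambda = 2*d*cos(theta), with theta measured
    from the surface normal; refraction and absorption corrections are ignored,
    so this is only a rough first-order estimate.
    """
    theta = math.radians(angle_from_normal_deg)
    return 2.0 * period_nm * math.cos(theta) / order

# Assumed Mo/Si bilayer period of ~6.9 nm (hypothetical nominal value).
for angle_deg in (0, 6, 12, 20):
    peak = bragg_peak_wavelength_nm(6.9, angle_deg)
    print(f"incidence {angle_deg:2d} deg from normal -> peak near {peak:.2f} nm")
```

Consistent with the text, the estimated peak moves toward longer wavelengths as the incidence angle approaches normal, and the spread across angles is comparable to the 13.3–13.7 nm band over which the reflected power is distributed.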
A narrower bandwidth would increase sensitivity to mask absorber and buffer thickness on the 1 nm scale. Flare Flare is the presence of background light originating from scattering off of surface features which are not resolved by the light. In EUV systems, this light can be EUV or out-of-band (OoB) light that is also produced by the EUV source. The OoB light adds the complication of affecting the resist exposure in ways other than accounted for by the EUV exposure. OoB light exposure may be alleviated by a layer coated above the resist, as well as 'black border' features on the EUV mask. However, the layer coating inevitably absorbs EUV light, and the black border adds EUV mask processing cost. Line tip effects A key challenge for EUV is the counter-scaling behavior of the line tip-to-tip (T2T) distance as half-pitch (hp) is scaled down. This is in part due to lower image contrast for the binary masks used in EUV lithography, which is not encountered with the use of phase shift masks in immersion lithography. The rounding of the corners of the line end leads to line end shortening, and this is worse for binary masks. The use of phase-shift masks in EUV lithography has been studied but encounters difficulties from phase control in thin layers as well as the bandwidth of the EUV light itself. More conventionally, optical proximity correction (OPC) is used to address the corner rounding and line-end shortening. In spite of this, it has been shown that the tip-to-tip resolution and the line tip printability are traded off against each other, being effectively CDs of opposite polarity. In unidirectional metal layers, tip-to-tip spacing is one of the more severe issues for single exposure patterning. For the 40 nm pitch vertical lines, an 18 nm nominal tip-to-tip drawn gap resulted in an actual tip-to-tip distance of 29 nm with OPC, while for 32 nm pitch horizontal lines, the tip-to-tip distance with a 14 nm nominal gap went to 31 nm with OPC. These actual tip-to-tip distances define a lower limit of the half-pitch of the metal running in the direction perpendicular to the tip. In this case, the lower limit is around 30 nm. With further optimization of the illumination (discussed in the section on source-mask optimization), the lower limit can be further reduced to around 25 nm. For larger pitches, where conventional illumination can be used, the line tip-to-tip distance is generally larger. For the 24 nm half-pitch lines, with a 20 nm nominally drawn gap, the distance was actually 45 nm, while for 32 nm half-pitch lines, the same nominal gap resulted in a tip-to-tip distance of 34 nm. With OPC, these become 39 nm and 28 nm for 24 nm half-pitch and 32 nm half-pitch, respectively. Enhancement opportunities for EUV patterning Assist features Assist features are often used to help balance asymmetry from non-telecentricity at different slit positions, due to different illumination angles, starting at the 7 nm node, where the pitch is ~ 41 nm for a wavelength ~13.5 nm and NA=0.33, corresponding to k1 ~ 0.5. However, the asymmetry is reduced but not eliminated, since the assist features mainly enhance the highest spatial frequencies, whereas intermediate spatial frequencies, which also affect feature focus and position, are not much affected. The coupling between the primary image and the self images is too strong for the asymmetry to be eliminated by assist features; only asymmetric illumination can achieve this. Assist features may also get in the way of access to power/ground rails. 
Power rails are expected to be wider, which also limits the effectiveness of using assist features, by constraining the local pitch. Local pitches between 1× and 2× the minimum pitch forbid assist feature placement, as there is simply no room to preserve the local pitch symmetry. In fact, for the application to the two-bar asymmetry case, the optimum assist feature placement may be less than or exceed the two-bar pitch. Depending on the parameter to be optimized (process window area, depth of focus, exposure latitude), the optimum assist feature configuration can be very different, e.g., pitch between assist feature and bar being different from two-bar pitch, symmetric or asymmetric, etc.. At pitches smaller than 58 nm, there is a tradeoff between depth of focus enhancement and contrast loss by assist feature placement. Generally, there is still a focus-exposure tradeoff as the dose window is constrained by the need to have the assist features not print accidentally. An additional concern comes from shot noise; sub-resolution assist features (SRAFs) cause the required dose to be lower, so as not to print the assist features accidentally. This results in fewer photons defining smaller features (see discussion in section on shot noise). As SRAFs are smaller features than primary features and are not supposed to receive doses high enough to print, they are more susceptible to stochastic dose variations causing printing errors; this is particularly prohibitive for EUV, where phase-shift masks may need to be used. Source-mask optimization Due to the effects of non-telecentricity, standard illumination pupil shapes, such as disc or annular, are not sufficient to be used for feature sizes of ~20 nm or below (10 nm node and beyond). Instead certain parts of the pupil (often over 50%) must be asymmetrically excluded. The parts to be excluded depend on the pattern. In particular, the densest allowed lines need to be aligned along one direction and prefer a dipole shape. For this situation, double exposure lithography would be required for 2D patterns, due to the presence of both X- and Y-oriented patterns, each requiring its own 1D pattern mask and dipole orientation. There may be 200–400 illuminating points, each contributing its weight of the dose to balance the overall image through focus. Thus the shot noise effect (to be discussed later) critically affects the image position through focus, in a large population of features. Double- or multiple-patterning would also be required if a pattern consists of sub-patterns which require significantly different optimized illuminations, due to different pitches, orientations, shapes, and sizes. Impact of slit position and aberrations Largely due to the slit shape, and the presence of residual aberrations, the effectiveness of SMO varies across slit position. At each slit position, there are different aberrations and different azimuthal angles of incidence leading to different shadowing. Consequently, there could be uncorrected variations across slit for aberration-sensitive features, which may not be obviously seen with regular line-space patterns. At each slit position, although optical proximity correction (OPC), including the assist features mentioned above, may also be applied to address the aberrations, they also feedback into the illumination specification, since the benefits differ for different illumination conditions. 
This would necessitate the use of different source-mask combinations at each slit position, i.e., multiple mask exposures per layer. The above-mentioned chromatic aberrations, due to mask-induced apodization, also lead to inconsistent source-mask optimizations for different wavelengths. Pitch-dependent focus windows The best focus for a given feature size varies as a strong function of pitch, polarity, and orientation under a given illumination. At 36 nm pitch, horizontal and vertical darkfield features have more than 30 nm difference of focus. The 34 nm pitch and 48 nm pitch features have the largest difference of best focus regardless of feature type. In the 48–64 nm pitch range, the best focus position shifts roughly linearly as a function of pitch, by as much as 10–20 nm. For the 34–48 nm pitch range, the best focus position shifts roughly linearly in the opposite direction as a function of pitch. This can be correlated with the phase difference between the zero and first diffraction orders. Assist features, if they can fit within the pitch, were found not to reduce this tendency much, for a range of intermediate pitches, or even worsened it for the case of 18–27 nm and quasar illumination. 50 nm contact holes on 100 nm and 150 pitches had best focus positions separated by roughly 25 nm; smaller features are expected to be worse. Contact holes in the 48–100 nm pitch range showed a 37 nm best focus range. The best focus position vs. pitch is also dependent on resist. Critical layers often contain lines at one minimum pitch of one polarity, e.g., darkfield trenches, in one orientation, e.g., vertical, mixed with spaces of the other polarity of the other orientation. This often magnifies the best focus differences, and challenges the tip-to-tip and tip-to-line imaging. Reduction of pupil fill A consequence of SMO and shifting focus windows has been the reduction of pupil fill. In other words, the optimum illumination is necessarily an optimized overlap of the preferred illuminations for the various patterns that need to be considered. This leads to lower pupil fill providing better results. However, throughput is affected below 20% pupil fill due to absorption. Phase shift masks A commonly touted advantage of EUV has been the relative ease of lithography, as indicated by the ratio of feature size to the wavelength multiplied by the numerical aperture, also known as the k1 ratio. An 18 nm metal linewidth has a k1 of 0.44 for 13.5 nm wavelength, 0.33 NA, for example. For the k1 approaching 0.5, some weak resolution enhancement including attenuated phase shift masks has been used as essential to production with the ArF laser wavelength (193 nm), whereas this resolution enhancement is not available for EUV. In particular, 3D mask effects including scattering at the absorber edges distort the desired phase profile. Also, the phase profile is effectively derived from the plane wave spectrum reflected from the multilayer through the absorber rather than the incident plane wave. Without absorbers, near-field distortion also occurs at an etched multilayer sidewall due to the oblique incidence illumination; some light traverses only a limited number of bilayers near the sidewall. Additionally, the different polarizations (TE and TM) have different phase shifts..Fundamentally, a chromeless phase shift mask enables pitch splitting by suppression of the zeroth diffracted order on the mask, but fabricating a high quality phase shift mask for EUV is certainly not a trivial task. 
One possible way to achieve this is through spatial filtering at the Fourier plane of the mask pattern. At Lawrence Berkeley National Lab, the zeroth-order light is blocked by a central obscuration, and the ±1 diffracted orders are captured by the clear aperture, providing a functional equivalent to the chromeless phase shift mask while using a conventional binary amplitude mask. EUV photoresist exposure: the role of electrons EUV light generates photoelectrons upon absorption by matter. These photoelectrons in turn generate secondary electrons, which slow down before engaging in chemical reactions. At sufficient doses, 40 eV electrons are known to penetrate 180 nm thick resist, leading to development. At a dose of 160 μC/cm2, corresponding to 15 mJ/cm2 EUV dose assuming one electron/photon, 30 eV electrons removed 7 nm of PMMA resist after standard development. For a higher 30 eV dose of 380 μC/cm2, equivalent to 36 mJ/cm2 at one electron/photon, 10.4 nm of PMMA resist are removed. These indicate the distances the electrons can travel in resist, regardless of direction. The degree of photoelectron emission from the layer underlying the EUV photoresist has been shown to affect the depth of focus. Unfortunately, hardmask layers tend to increase photoelectron emission, degrading the depth of focus. Electrons from defocused images in the resist can also affect the best focus image. The randomness of the number of secondary electrons is itself a source of stochastic behavior in EUV resist images. The scale length of electron blur itself has a distribution. Intel demonstrated with a rigorous simulation that EUV-released electrons scatter distances larger than 15 nm in EUV resists. The electron blur is also affected by total internal reflection from the top surface of the resist film. Effect of underlying layers Secondary electrons from layers underneath the resist can affect the resist profile as well as pattern collapse. Hence, selection of both the underlayer and the layer beneath it is an important consideration for EUV lithography. Moreover, the electrons from defocused images can aggravate the stochastic nature of the image. Contamination effects Resist outgassing Due to the high efficiency of absorption of EUV by photoresists, heating and outgassing become primary concerns. One well-known issue is contamination deposition on the resist from ambient or outgassed hydrocarbons, which results from EUV- or electron-driven reactions. Organic photoresists outgas hydrocarbons while metal oxide photoresists outgas water and oxygen and metal (in a hydrogen ambient); the last is uncleanable. The carbon contamination is known to affect multilayer reflectivity, while the oxygen is particularly harmful for the ruthenium capping layers (relatively stable under EUV and hydrogen conditions) on the EUV multilayer optics. Tin redeposition Atomic hydrogen in the tool chambers is used to clean tin and carbon which deposit on the EUV optical surfaces. Atomic hydrogen is produced by EUV light directly photoionizing H2: hν + H2 → H+ + H + e−. Electrons generated in the above reaction may also dissociate H2 to form atomic hydrogen: e− + H2 → H+ + H + 2e−. The reaction with tin in the light source (e.g., tin on an optical surface in the source) to form volatile SnH4 (stannane) that can be pumped out from the source proceeds via the reaction Sn(s) + 4 H(g) → SnH4(g). The SnH4 can reach the coatings of other EUV optical surfaces, where it redeposits Sn via the reaction SnH4 → Sn(s) + 2 H2(g).
Redeposition may also occur by other intermediate reactions. The redeposited Sn might be subsequently removed by atomic-hydrogen exposure. However, overall, the tin cleaning efficiency (the ratio of the removed tin flux from a tin sample to the atomic-hydrogen flux to the tin sample) is less than 0.01%, due to both redeposition and hydrogen desorption, leading to formation of hydrogen molecules at the expense of atomic hydrogen. The tin cleaning efficiency for tin oxide is found to be roughly twice that of tin (with a native oxide layer of ~2 nm on it). Injecting a small amount of oxygen into the light source may improve the tin cleaning rate. Hydrogen blistering Hydrogen also reacts with metal-containing compounds to reduce them to metal, and diffuses through the silicon and molybdenum in the multilayer, eventually causing blistering. Capping layers that mitigate hydrogen-related damage often reduce reflectivity to well below 70%. Capping layers are known to be permeable to ambient gases including oxygen and hydrogen, as well as susceptible to the hydrogen-induced blistering defects. Hydrogen may also react with the capping layer, resulting in its removal. TSMC proposed some means for mitigating hydrogen blistering defects on EUV masks, which may impact productivity. Tin spitting Hydrogen can penetrate molten tin (Sn), creating hydrogen bubbles inside it. If a bubble reaches the molten tin surface, it bursts and ejects tin, resulting in tin spreading over a large angular range. This phenomenon is called tin spitting and is one of the sources of EUV collector contamination. Resist erosion Hydrogen also reacts with resists to etch or decompose them. Besides photoresist, hydrogen plasmas can also etch silicon, albeit very slowly. Membrane To help mitigate the above effects, the latest EUV tool introduced in 2017, the NXE:3400B, features a membrane that separates the wafer from the projection optics of the tool, protecting the latter from outgassing from the resist on the wafer. The membrane contains layers which absorb DUV and IR radiation, and transmits 85–90% of the incident EUV radiation. There is, of course, accumulated contamination from wafer outgassing as well as particles in general (although the latter are out of focus, they may still obstruct light). EUV-induced plasma EUV lithographic systems operate in a 1–10 Pa hydrogen background gas, which the EUV light partially ionizes into a plasma. This plasma is a source of VUV radiation as well as electrons and hydrogen ions, and it is known to etch exposed materials. In 2023, a study supported by TSMC was published, indicating net charging by electrons from the plasma as well as from electron emission. The charging was found to occur even outside the EUV exposure area, indicating that the surrounding area had been exposed to electrons. Due to chemical sputtering of carbon by the hydrogen plasma, there can be generation of nanoparticles, which can obstruct the EUV resist exposure. Mask defects Reducing defects on extreme ultraviolet (EUV) masks is currently one of the most critical issues to be addressed for commercialization of EUV lithography. Defects can be buried underneath or within the multilayer stack or be on top of the multilayer stack. Mesas or protrusions form on the sputtering targets used for multilayer deposition, which may fall off as particles during the multilayer deposition. In fact, defects of atomic scale height (0.3–0.5 nm) with 100 nm FWHM can still be printable, exhibiting a 10% CD impact.
IBM and Toppan reported at Photomask Japan 2015 that smaller defects, e.g., 50 nm size, can have 10% CD impact even with 0.6 nm height, yet remain undetectable. Furthermore, the edge of a phase defect will further reduce reflectivity by more than 10% if its deviation from flatness exceeds 3 degrees, due to the deviation from the target angle of incidence of 84 degrees with respect to the surface. Even if the defect height is shallow, the edge still deforms the overlying multilayer, producing an extended region where the multilayer is sloped. The more abrupt the deformation, the narrower the defect edge extension, the greater the loss in reflectivity. EUV mask defect repair is also more complicated due to the across-slit illumination variation mentioned above. Due to the varying shadowing sensitivity across the slit, the repair deposition height must be controlled very carefully, being different at different positions across the EUV mask illumination slit. Multilayer reflectivity random variations GlobalFoundries and Lawrence Berkeley Labs carried out a Monte Carlo study to simulate the effects of intermixing between the molybdenum (Mo) and silicon (Si) layers in the multilayer that is used to reflect EUV light from the EUV mask. The results indicated high sensitivity to the atomic-scale variations of layer thickness. Such variations could not be detected by wide-area reflectivity measurements but would be significant on the scale of the critical dimension (CD). The local variation of reflectivity could be on the order of 10% for a few nm standard deviation. Multilayer damage Multiple EUV pulses at less than 10 mJ/cm2 could accumulate damage to a Ru-capped Mo/Si multilayer mirror optic element. The angle of incidence was 16° or 0.28 rads, which is within the range of angles for a 0.33 NA optical system. Pellicles Production EUV tools need a pellicle to protect the mask from contamination. Pellicles are normally expected to protect the mask from particles during transport, entry into or exit from the exposure chamber, as well as the exposure itself. Without pellicles, particle adders would reduce yield, which has not been an issue for conventional optical lithography with 193 nm light and pellicles. However, for EUV, the feasibility of pellicle use is severely challenged, due to the required thinness of the shielding films to prevent excessive EUV absorption. Particle contamination would be prohibitive if pellicles were not stable above 200 W, i.e., the targeted power for manufacturing. Heating of the EUV mask pellicle (film temperature up to 750 K for 80 W incident power) is a significant concern, due to the resulting deformation and transmission decrease. ASML developed a 70 nm thick polysilicon pellicle membrane, which allows EUV transmission of 82%; however, less than half of the membranes survived expected EUV power levels. SiNx pellicle membranes also failed at 82 W equivalent EUV source power levels. At target 250 W levels, the pellicle is expected to reach 686 degrees Celsius, well over the melting point of aluminum. Alternative materials need to allow sufficient transmission as well as maintain mechanical and thermal stability. However, graphite, graphene or other carbon nanomaterials (nanosheets, nanotubes) are damaged by EUV due to the release of electrons and also too easily etched in the hydrogen cleaning plasma expected to be deployed in EUV scanners. Hydrogen plasmas can also etch silicon as well. 
A coating helps improve hydrogen resistance, but this reduces transmission and/or emissivity, and may also affect mechanical stability (e.g., bulging). Wrinkles on pellicles can cause CD nonuniformity due to uneven absorption; this is worse for smaller wrinkles and more coherent illumination, i.e., lower pupil fill. In the absence of pellicles, EUV mask cleanliness would have to be checked before actual product wafers are exposed, using wafers specially prepared for defect inspection. These wafers are inspected after printing for repeating defects indicating a dirty mask; if any are found, the mask must be cleaned and another set of inspection wafers are exposed, repeating the flow until the mask is clean. Any affected product wafers must be reworked. TSMC reported starting limited use of its own pellicle in 2019 and continuing to expand afterwards, and Samsung is planning pellicle introduction in 2022. Hydrogen bulging defects As discussed above, with regard to contamination removal, hydrogen used in recent EUV systems can penetrate into the EUV mask layers. TSMC indicated in its patent that hydrogen would enter from the mask edge. Once trapped, bulge defects or blisters were produced, which could lead to film peeling. These are essentially the blister defects which arise after a sufficient number of EUV mask exposures in the hydrogen environment. TSMC proposed some means for mitigating hydrogen blistering defects on EUV masks, which may impact productivity. EUV stochastic issues EUV lithography is particularly sensitive to stochastic effects. In a large population of features printed by EUV, although the overwhelming majority are resolved, some suffer complete failure to print, e.g. missing holes or bridging lines. A known significant contribution to this effect is the dose used to print. This is related to shot noise, to be discussed further below. Due to the stochastic variations in arriving photon numbers, some areas designated to print actually fail to reach the threshold to print, leaving unexposed defect regions. Some areas may be overexposed, leading to excessive resist loss or crosslinking. The probability of stochastic failure increases exponentially as feature size decreases, and for the same feature size, increasing distance between features also significantly increases the probability. Line cuts which are misshapen are a significant issue due to potential arcing and shorting. Yield requires detection of stochastic failures down to below 1e-12. The tendency to stochastic defects is worse from defocus over a large pupil fill. Multiple failure modes may exist for the same population. For example, besides bridging of trenches, the lines separating the trenches may be broken. This can be attributed to stochastic resist loss, from secondary electrons. The randomness of the number of secondary electrons is itself a source of stochastic behavior in EUV resist images. The coexistence of stochastically underexposed and overexposed defect regions leads to a loss of dose window at a certain post-etch defect level between the low-dose and high-dose patterning cliffs. Hence, the resolution benefit from shorter wavelength is lost. The resist underlayer also plays an important role. This could be due to the secondary electrons generated by the underlayer. Secondary electrons may remove over 10 nm of resist from the exposed edge. The defect level is on the order of 1K/mm2. 
In 2020, Samsung reported that 5 nm layouts had risks for process defects and had started implementing automated check and fixing. Photon shot noise also leads to stochastic edge placement error. The photon shot noise is augmented to some degree by blurring factors such as secondary electrons or acids in chemically amplified resists; when significant the blur also reduces the image contrast at the edge. An edge placement error (EPE) as large as 8.8 nm was measured for a 48 nm pitch EUV-printed metal pattern. With the natural Poisson distribution due to the random arrival and absorption times of the photons, there is an expected natural dose (photon number) variation of at least several percent 3 sigma, making the exposure process susceptible to stochastic variations. The dose variation leads to a variation of the feature edge position, effectively becoming a blur component. Unlike the hard resolution limit imposed by diffraction, shot noise imposes a softer limit, with the main guideline being the ITRS line width roughness (LWR) spec of 8% (3s) of linewidth. Increasing the dose will reduce the shot noise, but this also requires higher source power. The two issues of shot noise and EUV-released electrons point out two constraining factors: 1) keeping dose high enough to reduce shot noise to tolerable levels, but also 2) avoiding too high a dose due to the increased contribution of EUV-released photoelectrons and secondary electrons to the resist exposure process, increasing the edge blur and thereby limiting the resolution. Aside from the resolution impact, higher dose also increases outgassing and limits throughput, and crosslinking occurs at very high dose levels. For chemically amplified resists, higher dose exposure also increases line edge roughness due to acid generator decomposition. Even with higher absorption at the same dose, EUV has a larger shot noise concern than the ArF (193 nm) wavelength, mainly because it is applied to thinner resists. Due to stochastic considerations, the IRDS 2022 lithography roadmap now acknowledges increasing doses for smaller feature sizes. However, an upper limit to how much dose can be increased is imposed by resist loss. Due to resist thinning with increased dose, EUV stochastic defectivity limits will define a narrow CD or dose window. The thinner resist at higher incident dose reduces absorption, and hence, absorbed dose. EUV resolution will likely be compromised by stochastic effects. Stochastic defect densities have exceeded 1/cm2, at 36 nm pitch. In 2024, an EUV resist exposure by ASML revealed a missing+bridging 32 nm pitch contact hole defect density floor >0.25/cm2 (177 defects per wafer), made worse with thinner resist. ASML indicated 30 nm pitch would not use direct exposure but double patterning. Intel did not use EUV for 30 nm pitch. Pupil fill ratio For pitches less than half-wavelength divided by numerical aperture, dipole illumination is necessary. This illumination fills at most a leaf-shaped area at the edge of the pupil. However, due to 3D effects in the EUV mask, smaller pitches require even smaller portions of this leaf shape. Below 20% of the pupil, the throughput and dose stability begin to suffer. Higher numerical aperture allows a higher pupil fill to be used for the same pitch, but depth of focus is significantly reduced. A larger pupil fill is more susceptible to stochastic fluctuations from point to point in the pupil. 
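The scale of the shot-noise problem described above can be seen with a back-of-the-envelope Poisson estimate. The sketch below is illustrative only: the dose, absorbed fraction, and 10 nm × 10 nm reference area are assumptions chosen for the example, not values taken from the studies cited above.

```python
# Rough Poisson shot-noise estimate for an EUV exposure.
# All input numbers are illustrative assumptions, not measured values.
import math

PLANCK = 6.626e-34        # J*s
LIGHT_SPEED = 2.998e8     # m/s
WAVELENGTH = 13.5e-9      # m (EUV)

photon_energy = PLANCK * LIGHT_SPEED / WAVELENGTH   # ~1.47e-17 J (~92 eV)

dose_mJ_cm2 = 30.0        # assumed incident dose
edge_area_nm = 10.0       # assumed 10 nm x 10 nm area relevant to edge placement
absorbed_fraction = 0.2   # assumed fraction absorbed in a thin resist film

area_cm2 = (edge_area_nm * 1e-7) ** 2                 # nm -> cm
n_incident = dose_mJ_cm2 * 1e-3 * area_cm2 / photon_energy
n_absorbed = n_incident * absorbed_fraction

print(f"incident photons on the area: {n_incident:.0f}")    # ~2000
print(f"absorbed photons            : {n_absorbed:.0f}")    # ~400
print(f"3-sigma relative dose variation (incident): {3 / math.sqrt(n_incident):.1%}")
print(f"3-sigma relative dose variation (absorbed): {3 / math.sqrt(n_absorbed):.1%}")
```

With only a few hundred absorbed photons deciding where an edge prints, the 3-sigma dose variation is already in the several-percent range, and halving the linear feature size quarters the photon count and doubles the relative noise; this is the arithmetic behind the roadmap's move to higher doses at smaller pitches.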
Use with multiple-patterning EUV is anticipated to use double-patterning at around 34 nm pitch with 0.33 NA. This resolution is equivalent to '1Y' for DRAM. In 2020, ASML reported that the 5 nm M0 layer (30 nm minimum pitch) required double-patterning. In H2 2018, TSMC confirmed that its 5 nm EUV scheme still used multi-patterning, also indicating that mask count did not decrease from its 7 nm node, which used extensive DUV multi-patterning, to its 5 nm node, which used extensive EUV. EDA vendors also indicated the continued use of multi-patterning flows. While Samsung introduced its own 7 nm process with EUV single-patterning, it encountered severe photon shot noise causing excessive line roughness, which required higher dose, resulting in lower throughput. TSMC's 5 nm node uses even tighter design rules. Samsung indicated smaller dimensions would have more severe shot noise. In Intel's complementary lithography scheme at 20 nm half-pitch, EUV would be used only in a second line-cutting exposure after a first 193 nm line-printing exposure. Multiple exposures would also be expected where two or more patterns in the same layer, e.g., different pitches or widths, must use different optimized source pupil shapes. For example, when considering a staggered bar array of 64 nm vertical pitch, changing the horizontal pitch from 64 nm to 90 nm changes the optimized illumination significantly. Source-mask optimization that is based only on line-space gratings and tip-to-tip gratings does not entail improvements for all parts of a logic pattern, e.g., a dense trench with a gap on one side. In 2020, ASML reported that for the 3 nm node, center-to-center contact/via spacings of 40 nm or less would require double- or triple-patterning for some contact/via arrangements. For the 24–36 nm metal pitch, it was found that using EUV as a (second) cutting exposure had a significantly wider process window than as a complete single exposure for the metal layer. However, using a second exposure in the LELE approach for double patterning does not get around the vulnerability to stochastic defects. Multiple exposures of the same mask are also expected for defect management without pellicles, limiting productivity similarly to multiple-patterning. Self-aligned litho-etch-litho-etch (SALELE) is a hybrid SADP/LELE technique whose implementation started at the 7 nm node; it has become an accepted form of double-patterning for use with EUV. Single-patterning extension: anamorphic high-NA A return to extended generations of single-patterning would be possible with higher numerical aperture (NA) tools. An NA of 0.45 could require retuning of a few percent. Increasing demagnification could avoid this retuning, but the reduced field size severely affects large patterns (one die per 26 mm × 33 mm field), such as the many-core multi-billion-transistor 14 nm Xeon chips, by requiring field stitching of two mask exposures. In 2015, ASML disclosed details of its anamorphic next-generation EUV scanner, with an NA of 0.55. These machines cost around USD 360 million. The demagnification is increased from 4× to 8× only in one direction (in the plane of incidence). However, the 0.55 NA has a much smaller depth of focus than immersion lithography. Also, an anamorphic 0.52 NA tool has been found to exhibit too much CD and placement variability for 5 nm node single exposure and multi-patterning cutting.
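The depth-of-focus penalty mentioned above can be estimated from the usual Rayleigh scaling DOF ≈ k2·λ/NA², where k2 is an empirical process factor; the value used in the sketch below is a placeholder assumption, and only the ratio between NA settings (which is independent of k2) should be read from it.

```python
# Rayleigh depth-of-focus scaling, DOF ~ k2 * wavelength / NA^2.
# k2 is process-dependent; 1.0 here is a placeholder assumption.
def rayleigh_dof_nm(wavelength_nm: float, na: float, k2: float = 1.0) -> float:
    return k2 * wavelength_nm / na ** 2

for na in (0.33, 0.55, 0.75):
    print(f"NA {na:.2f}: DOF ~ {rayleigh_dof_nm(13.5, na):.0f} nm")
# NA 0.33 -> ~124 nm, NA 0.55 -> ~45 nm, NA 0.75 -> ~24 nm: roughly a factor
# of 2.8 less focus budget going from 0.33 to 0.55 NA, whatever k2 is chosen.
```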
The reduction of depth of focus with increasing NA is also a concern, especially in comparison with multi-patterning exposures using 193 nm immersion lithography. High-NA EUV tools focus horizontal and vertical lines differently from low-NA systems, due to the different demagnification for horizontal lines. High-NA EUV tools also suffer from obscuration, which can cause errors in the imaging of certain patterns. The first high-NA tools are expected at Intel in 2025 at the earliest. For sub-2 nm nodes, high-NA EUV systems will be affected by a host of issues: throughput, new masks, polarization, thinner resists, and secondary electron blur and randomness. Reduced depth of focus requires resist thickness less than 30 nm, which in turn increases stochastic effects due to reduced photon absorption. Electron blur is estimated to be at least ~2 nm, which is enough to thwart the benefit of high-NA EUV lithography. Beyond high-NA, ASML in 2024 announced plans for the development of a hyper-NA EUV tool with an NA beyond 0.55, such as an NA of 0.75 or 0.85. These machines could cost USD 720 million each and are expected to be available in 2030. A problem with hyper-NA is polarization of the EUV light causing a reduction in image contrast. Beyond EUV wavelength A much shorter wavelength (~6.7 nm) would be beyond EUV, and is often referred to as BEUV (beyond extreme ultraviolet). With current technology, BEUV wavelengths would have worse shot noise effects without ensuring sufficient dose. (The generally accepted 'border' of UV is 10 nm, below which the (soft) X-ray region begins.) References Further reading Michael Purvis, An Introduction to EUV Sources for Lithography, ASML, STROBE, 2020-09-25. Igor Fomenkov, EUV Source for Lithography in HVM - performance and prospects, ASML Fellow, Source workshop, Amsterdam, 2019-11-05. Related links EUV presents economic challenges Industry mulls 6.7-nm wavelength EUV Lithography (microfabrication) Extreme ultraviolet
Extreme ultraviolet lithography
[ "Chemistry", "Materials_science" ]
12,510
[ "Microtechnology", "Ultraviolet radiation", "Extreme ultraviolet", "Nanotechnology", "Lithography (microfabrication)" ]
2,154,436
https://en.wikipedia.org/wiki/Next-generation%20lithography
Next-generation lithography or NGL is a term used in integrated circuit manufacturing to describe the lithography technologies in development which are intended to replace current techniques. Driven by Moore's law in the semiconductor industries, the shrinking of the chip size and critical dimension continues. The term applies to any lithography method which uses a shorter-wavelength light or beam type than the current state of the art, such as X-ray lithography, electron beam lithography, focused ion beam lithography, and nanoimprint lithography. The term may also be used to describe techniques which achieve finer resolution features from an existing light wavelength. Many technologies once termed "next generation" have entered commercial production, and open-air photolithography, with visible light projected through hand-drawn photomasks, has gradually progressed to deep-UV immersion lithography using optical proximity correction, inverse lithography technology, off-axis illumination, phase-shift masks, double patterning, and multiple patterning. In the late 2010s, the combination of many such techniques was able to achieve features on the order of 20 nm with the 193 nm-wavelength ArF excimer laser in the 14 nm, 10 nm and 7 nm processes, though at the cost of adding processing steps and therefore cost. 13.5 nm extreme ultraviolet (EUV) lithography, long considered a leading candidate for next-generation lithography, began to enter commercial mass-production in 2018. As of 2021, Samsung and TSMC were gradually phasing EUV lithography into their production lines, as it became economical to replace multiple processing steps with single EUV steps. As of the early 2020s, many EUV techniques are still in development and many challenges remain to be solved, positioning EUV lithography as being in transition from "next generation" to "state of the art." Candidates for next-generation lithography beyond EUV include X-ray lithography, electron beam lithography, focused ion beam lithography, nanoimprint lithography, and quantum lithography. Several of these technologies have experienced periods of popularity, but have remained outcompeted by the continuing improvements in photolithography. Electron beam lithography was most popular during the 1970s, but was replaced in popularity by X-ray lithography during the 1980s and early 1990s, and then by EUV lithography from the mid-1990s to the mid-2000s. Focused ion beam lithography has carved a niche for itself in the area of defect repair. Nanoimprint's popularity is rising, and is positioned to succeed EUV as the most popular choice for next-generation lithography, due to its inherent simplicity and low cost of operation as well as its success in the LED, hard disk drive and microfluidics sectors. The rise and fall in popularity of each NGL candidate has largely hinged on its throughput capability and its cost of operation and implementation. Electron beam and nanoimprint lithography are limited mainly by the throughput, while EUV and X-ray lithography are limited by implementation and operation costs. The projection of charged particles (ions or electrons) through stencil masks was also popularly considered in the early 2000s but eventually fell victim to both low throughput and implementation difficulties. Issues Fundamental issues Regardless of whether NGL or photolithography is used, etching of polymer (resist) is the last step. 
Ultimately the quality (roughness) as well as resolution of this polymer etching limits the inherent resolution of the lithography technique. Next generation lithography also generally makes use of ionizing radiation, leading to secondary electrons which can limit resolution to effectively > 20 nm. Studies have also found that for NGL to reach LER (line edge roughness) objectives ways to control variables such as polymer size, image contrast and resist contrast must be found. Market issues The above-mentioned competition between NGL and the recurring extension of photolithography, where the latter consistently wins, may be more a strategic than a technical matter. If a highly scalable NGL technology were to become readily available, late adopters of leading-edge technology would immediately have the opportunity to leapfrog the current use of advanced but costly photolithography techniques, at the expense of the early adopters of leading-edge technology, who have been the key investors in NGL. While this would level the playing field, it is disruptive enough to the industry landscape that the leading semiconductor companies would probably not want to see it happen. The following example would make this clearer. Suppose company A manufactures down to 28 nm, while company B manufactures down to 7 nm, by extending its photolithography capability by implementing double patterning. If an NGL were deployed for the 5 nm node, both companies would benefit, but company A currently manufacturing at the 28 nm node would benefit much more because it would immediately be able to use the NGL for manufacturing at all design rules from 22 nm down to 7 nm (skipping all the said multiple patterning), while company B would only benefit starting at the 5 nm node, having already spent much on extending photolithography from its 22 nm process down to 7 nm. The gap between Company B, whose customers expect it to advance the leading edge, and Company A, whose customers don't expect an equally aggressive roadmap, will continue to widen as NGL is delayed and photolithography is extended at greater and greater cost, making the deployment of NGL less and less attractive strategically for Company B. With NGL deployment, customers will also be able to demand lower prices for products made at advanced generations. This becomes more clear when considering that each resolution enhancement technique applied to photolithography generally extends the capability by only one or two generations. For this reason, the observation that "optical lithography will live forever" will likely hold, as the early adopters of leading-edge technology will never benefit from highly scalable lithography technologies in a competitive environment. There is therefore great pressure to deploy an NGL as soon as possible, but the NGL ultimately may be realized in the form of photolithography with more efficient multiple patterning, such as directed self-assembly or aggressive cut reduction. See also Computational lithography Nanolithography Quantum Lithography References Lithography (microfabrication)
Next-generation lithography
[ "Materials_science" ]
1,318
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
2,154,572
https://en.wikipedia.org/wiki/Nanobiotechnology
Nanobiotechnology, bionanotechnology, and nanobiology are terms that refer to the intersection of nanotechnology and biology. Given that the subject is one that has only emerged very recently, bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies. This discipline helps to indicate the merger of biological research with various fields of nanotechnology. Concepts that are enhanced through nanobiology include: nanodevices (such as biological machines), nanoparticles, and nanoscale phenomena that occurs within the discipline of nanotechnology. This technical approach to biology allows scientists to imagine and create systems that can be used for biological research. Biologically inspired nanotechnology uses biological systems as the inspirations for technologies not yet created. However, as with nanotechnology and biotechnology, bionanotechnology does have many potential ethical issues associated with it. The most important objectives that are frequently found in nanobiology involve applying nanotools to relevant medical/biological problems and refining these applications. Developing new tools, such as peptoid nanosheets, for medical and biological purposes is another primary objective in nanotechnology. New nanotools are often made by refining the applications of the nanotools that are already being used. The imaging of native biomolecules, biological membranes, and tissues is also a major topic for nanobiology researchers. Other topics concerning nanobiology include the use of cantilever array sensors and the application of nanophotonics for manipulating molecular processes in living cells. Recently, the use of microorganisms to synthesize functional nanoparticles has been of great interest. Microorganisms can change the oxidation state of metals. These microbial processes have opened up new opportunities for us to explore novel applications, for example, the biosynthesis of metal nanomaterials. In contrast to chemical and physical methods, microbial processes for synthesizing nanomaterials can be achieved in aqueous phase under gentle and environmentally benign conditions. This approach has become an attractive focus in current green bionanotechnology research towards sustainable development. Terminology The terms are often used interchangeably. When a distinction is intended, though, it is based on whether the focus is on applying biological ideas or on studying biology with nanotechnology. Bionanotechnology generally refers to the study of how the goals of nanotechnology can be guided by studying how biological "machines" work and adapting these biological motifs into improving existing nanotechnologies or creating new ones. Nanobiotechnology, on the other hand, refers to the ways that nanotechnology is used to create devices to study biological systems. In other words, nanobiotechnology is essentially miniaturized biotechnology, whereas bionanotechnology is a specific application of nanotechnology. For example, DNA nanotechnology or cellular engineering would be classified as bionanotechnology because they involve working with biomolecules on the nanoscale. Conversely, many new medical technologies involving nanoparticles as delivery systems or as sensors would be examples of nanobiotechnology since they involve using nanotechnology to advance the goals of biology. The definitions enumerated above will be utilized whenever a distinction between nanobio and bionano is made in this article. 
However, given the overlapping usage of the terms in modern parlance, individual technologies may need to be evaluated to determine which term is more fitting. As such, they are best discussed in parallel. Concepts Most of the scientific concepts in bionanotechnology are derived from other fields. Biochemical principles that are used to understand the material properties of biological systems are central in bionanotechnology because those same principles are to be used to create new technologies. Material properties and applications studied in bionanoscience include mechanical properties (e.g. deformation, adhesion, failure), electrical/electronic (e.g. electromechanical stimulation, capacitors, energy storage/batteries), optical (e.g. absorption, luminescence, photochemistry), thermal (e.g. thermomutability, thermal management), biological (e.g. how cells interact with nanomaterials, molecular flaws/defects, biosensing, biological mechanisms such as mechanosensation), nanoscience of disease (e.g. genetic disease, cancer, organ/tissue failure), as well as biological computing (e.g. DNA computing) and agriculture (targeted delivery of pesticides, hormones and fertilizers). The impact of bionanoscience, achieved through structural and mechanistic analyses of biological processes at the nanoscale, lies in the translation of those processes into synthetic and technological applications through nanotechnology. Nanobiotechnology takes most of its fundamentals from nanotechnology. Most of the devices designed for nano-biotechnological use are directly based on other existing nanotechnologies. Nanobiotechnology is often used to describe the overlapping multidisciplinary activities associated with biosensors, particularly where photonics, chemistry, biology, biophysics, nanomedicine, and engineering converge. Measurement in biology using waveguide techniques, such as dual-polarization interferometry, is another example. Applications Applications of bionanotechnology are extremely widespread. Insofar as the distinction holds, nanobiotechnology is much more commonplace in that it simply provides more tools for the study of biology. Bionanotechnology, on the other hand, promises to recreate biological mechanisms and pathways in a form that is useful in other ways. Nanomedicine Nanomedicine is a field of medical science whose applications are increasing. Nanobots The field includes nanorobots and biological machines, which constitute a very useful tool to develop this area of knowledge. In recent years, researchers have made many improvements in the different devices and systems required to develop functional nanorobots – such as motion and magnetic guidance. This suggests a new way of treating and dealing with diseases such as cancer; thanks to nanorobots, the side effects of chemotherapy could be controlled, reduced and even eliminated, so some years from now, cancer patients could be offered an alternative to chemotherapy, which kills not only cancerous cells but also healthy ones and causes secondary effects such as hair loss, fatigue or nausea. Nanobots could be used for various therapies, surgery, diagnosis, and medical imaging – such as via targeted drug-delivery to the brain (similar to nanoparticles) and other sites. Programmability for combinations of features such as "tissue penetration, site-targeting, stimuli responsiveness, and cargo-loading" makes such nanobots promising candidates for "precision medicine".
At a clinical level, cancer treatment with nanomedicine would consist of the supply of nanorobots to the patient through an injection; the nanorobots would search for cancerous cells while leaving the healthy ones untouched. Patients treated through nanomedicine would thereby not notice the presence of these nanomachines inside them; the only thing that would be noticeable is the progressive improvement of their health. Nanobiotechnology may be useful for medicine formulation. "Precision antibiotics" have been proposed that make use of bacteriocin mechanisms for targeted antibiotic action. Nanoparticles Nanoparticles are already widely used in medicine. Their applications overlap with those of nanobots, and in some cases it may be difficult to distinguish between them. They can be used for diagnosis and for targeted drug delivery, encapsulating medicines. Some can be manipulated using magnetic fields; for example, remote-controlled hormone release has been achieved experimentally this way. One example of an advanced application under development is "Trojan horse" designer nanoparticles that make blood cells eat away – from the inside out – portions of the atherosclerotic plaque that causes heart attacks, which are currently the most common cause of death globally. Artificial cells Artificial cells such as synthetic red blood cells, which have all or many of the natural cells' known broad properties and abilities, could be used to load functional cargos such as hemoglobin, drugs, magnetic nanoparticles, and ATP biosensors, which may enable additional non-native functionalities. Other Nanofibers that mimic the matrix around cells and contain molecules that were engineered to wiggle have been shown to be a potential therapy for spinal cord injury in mice. Technically, gene therapy can also be considered to be a form of nanobiotechnology or to move towards it. An example of a genome-editing-related development that is more clearly nanobiotechnology than more conventional gene therapies is the synthetic fabrication of functional materials in tissues. Researchers made C. elegans worms synthesize, fabricate, and assemble bioelectronic materials in their brain cells. This enabled modulation of membrane properties in specific neuron populations and manipulation of behavior in the living animals, which might be useful in the study and treatment of diseases such as multiple sclerosis in particular, and demonstrates the viability of such synthetic in vivo fabrication. Moreover, such genetically modified neurons may enable connecting external components – such as prosthetic limbs – to nerves. Nanosensors based on e.g. nanotubes, nanowires, cantilevers, or atomic force microscopy could be applied in diagnostic devices and sensors.
Artificial proteins might also become available to manufacture without the need for harsh chemicals and expensive machines. It has even been surmised that by the year 2055, computers may be made out of biochemicals and organic salts. In vivo biosensors Another example of current nanobiotechnological research involves nanospheres coated with fluorescent polymers. Researchers are seeking to design polymers whose fluorescence is quenched when they encounter specific molecules. Different polymers would detect different metabolites. The polymer-coated spheres could become part of new biological assays, and the technology might someday lead to particles which could be introduced into the human body to track down metabolites associated with tumors and other health problems. Another example, from a different perspective, would be evaluation and therapy at the nanoscopic level, i.e. the treatment of nanobacteria (25-200 nm sized) as is done by NanoBiotech Pharma. In vitro biosensors "Nanoantennas" made out of DNA – a novel type of nano-scale optical antenna – can be attached to proteins and produce a signal via fluorescence when these perform their biological functions, in particular for their distinct conformational changes. This could be used for further nanobiotechnology such as various types of nanomachines, to develop new drugs, for bioresearch and for new avenues in biochemistry. Energy It may also be useful in sustainable energy: in 2022, researchers reported 3D-printed nano-"skyscraper" electrodes – albeit micro-scale, the pillars had nano-features of porosity due to printed metal nanoparticle inks – (nanotechnology) that house cyanobacteria for extracting substantially more sustainable bioenergy from their photosynthesis (biotechnology) than in earlier studies. Nanobiology While nanobiology is in its infancy, there are a lot of promising methods that may rely on nanobiology in the future. Biological systems are inherently nano in scale; nanoscience must merge with biology in order to deliver biomacromolecules and molecular machines that are similar to nature. Controlling and mimicking the devices and processes that are constructed from molecules is a tremendous challenge to face for the converging disciplines of nanobiotechnology. All living things, including humans, can be considered to be nanofoundries. Natural evolution has optimized the "natural" form of nanobiology over millions of years. In the 21st century, humans have developed the technology to artificially tap into nanobiology. This process is best described as "organic merging with synthetic". Colonies of live neurons can live together on a biochip device; according to research from Gunther Gross at the University of North Texas. Self-assembling nanotubes have the ability to be used as a structural system. They would be composed together with rhodopsins; which would facilitate the optical computing process and help with the storage of biological materials. DNA (as the software for all living things) can be used as a structural proteomic system – a logical component for molecular computing. Ned Seeman – a researcher at New York University – along with other researchers are currently researching concepts that are similar to each other. Bionanotechnology Distinction from nanobiotechnology Broadly, bionanotechnology can be distinguished from nanobiotechnology in that it refers to nanotechnology that makes use of biological materials/components – it could in principle or does alternatively use abiotic components. 
It plays a smaller role in medicine (which is concerned with biological organisms). It makes use of natural or biomimetic systems or elements to build unique nanoscale structures and various applications that are not necessarily directly associated with biology, rather than mostly biological applications. In contrast, nanobiotechnology uses biotechnology miniaturized to nanometer size or incorporates nanomolecules into biological systems. In some future applications, both fields could be merged. DNA DNA nanotechnology is one important example of bionanotechnology. The utilization of the inherent properties of nucleic acids like DNA to create useful materials or devices – such as biosensors – is a promising area of modern research. DNA digital data storage refers mostly to the use of synthesized but otherwise conventional strands of DNA to store digital data, which could be useful for e.g. high-density long-term data storage that isn't accessed and written to frequently, as an alternative to 5D optical data storage or for use in combination with other nanobiotechnology. Membrane materials Another important area of research involves taking advantage of membrane properties to generate synthetic membranes. Proteins that self-assemble to generate functional materials could be used as a novel approach for the large-scale production of programmable nanomaterials. One example is the development of amyloids found in bacterial biofilms as engineered nanomaterials that can be programmed genetically to have different properties. Lipid nanotechnology Lipid nanotechnology is another major area of research in bionanotechnology, where physico-chemical properties of lipids such as their antifouling and self-assembly are exploited to build nanodevices with applications in medicine and engineering. Lipid nanotechnology approaches can also be used to develop next-generation emulsion methods to maximize both the absorption of fat-soluble nutrients and the ability to incorporate them into popular beverages. Computing "Memristors" fabricated from protein nanowires of the bacterium Geobacter sulfurreducens, which function at substantially lower voltages than previously described ones, may allow the construction of artificial neurons which function at voltages of biological action potentials. The nanowires have a range of advantages over silicon nanowires, and the memristors may be used to directly process biosensing signals, for neuromorphic computing (see also: wetware computer) and/or direct communication with biological neurons. Other Protein folding studies provide a third important avenue of research, but one that has been largely inhibited by our inability to predict protein folding with a sufficiently high degree of accuracy. Given the myriad uses that biological systems have for proteins, though, research into understanding protein folding is of high importance and could prove fruitful for bionanotechnology in the future. Agriculture In the agriculture industry, engineered nanoparticles have been serving as nanocarriers containing herbicides, chemicals, or genes, which target particular plant parts to release their content. Previously, nanocapsules containing herbicides have been reported to effectively penetrate through cuticles and tissues, allowing the slow and constant release of the active substances. Likewise, other literature describes how nano-encapsulated slow-release fertilizers have become a trend for reducing fertilizer consumption and minimizing environmental pollution through precision farming.
These are only a few examples from numerous research works which might open up exciting opportunities for the application of nanobiotechnology in agriculture. However, the compatibility of such engineered nanoparticles with plants should be considered before they are employed in agricultural practice. Based on a thorough literature survey, it was understood that there is only limited authentic information available to explain the biological consequences of engineered nanoparticles on treated plants. Certain reports underline the phytotoxicity of engineered nanoparticles of various origins to plants, depending on their concentrations and sizes. At the same time, however, an equal number of studies reported a positive outcome, with nanoparticles promoting the growth of the treated plants. In particular, compared to other nanoparticles, silver- and gold-nanoparticle-based applications elicited beneficial results in various plant species with little or no toxicity. Leaves of Asparagus treated with silver nanoparticles (AgNPs) showed increased content of ascorbate and chlorophyll. Similarly, AgNP-treated common bean and corn were reported earlier to show increased shoot and root length, leaf surface area, and chlorophyll, carbohydrate and protein contents. Gold nanoparticles have been used to induce growth and seed yield in Brassica juncea. Nanobiotechnology is also used in tissue culture. The administration of micronutrients at the level of individual atoms and molecules allows for the stimulation of various stages of development, initiation of cell division, and differentiation in the production of plant material, which must be qualitatively uniform and genetically homogeneous. The use of nanoparticles of zinc (ZnO NPs) and silver (Ag NPs) compounds gives very good results in the micropropagation of chrysanthemums using the method of single-node shoot fragments. Tools This field relies on a variety of research methods, including experimental tools (e.g. imaging, characterization via AFM/optical tweezers etc.), X-ray diffraction-based tools, synthesis via self-assembly, characterization of self-assembly (using e.g. MP-SPR, DPI, recombinant DNA methods, etc.), theory (e.g. statistical mechanics, nanomechanics, etc.), as well as computational approaches (bottom-up multi-scale simulation, supercomputing). Risk management As of 2009, the risks of nanobiotechnologies are poorly understood, and in the U.S. there is no solid national consensus on what kind of regulatory policy principles should be followed. For example, nanobiotechnologies may have hard-to-control effects on the environment, ecosystems and human health. The metal-based nanoparticles used for biomedical purposes are extremely enticing in various applications due to their distinctive physicochemical characteristics, allowing them to influence cellular processes at the biological level. The fact that metal-based nanoparticles have high surface-to-volume ratios makes them reactive or catalytic. Due to their small size, they are more likely to be able to penetrate biological barriers such as cell membranes and cause cellular dysfunction in living organisms. Indeed, the high toxicity of some transition metals can make it challenging to use mixed oxide NPs in biomedical applications. Such toxicity triggers adverse effects in organisms, causing oxidative stress, stimulating the formation of reactive oxygen species (ROS), perturbing mitochondria, and modulating cellular functions, with fatal results in some cases.
Bonin notes that "Nanotechnology is not a specific determinate homogenous entity, but a collection of diverse capabilities and applications" and that nanobiotechnology research and development is – as one of many fields – affected by dual-use problems. See also Biomimicry Colloidal gold Genome editing (bacteria, (micro-borgs)) Gold nanoparticle Nanomedicine Nanobiomechanics Nanoparticle–biomolecule conjugate Nanosubmarine Nanozymes References External links What is Bionanotechnology?—a video introduction to the field Nanobiotechnology in Orthopaedic Nanotechnology Biotechnology Nanomedicine
Nanobiotechnology
[ "Materials_science", "Engineering", "Biology" ]
4,320
[ "Materials science", "Biotechnology", "Nanomedicine", "nan", "Nanotechnology" ]
2,154,737
https://en.wikipedia.org/wiki/Detention%20basin
A detention basin or retarding basin is an excavated area installed on, or adjacent to, tributaries of rivers, streams, lakes or bays to protect against flooding and, in some cases, downstream erosion by storing water for a limited period of time. These basins are also called dry ponds, holding ponds or dry detention basins if no permanent pool of water exists. Detention ponds that are designed to permanently retain some volume of water at all times are called retention basins. In its basic form, a detention basin is used to manage water quantity while having a limited effectiveness in protecting water quality, unless it includes a permanent pool feature. Functions and design Detention basins are storm water best management practices that provide general flood protection and can also control extreme floods such as a 1 in 100-year storm event. The basins are typically built during the construction of new land development projects including residential subdivisions or shopping centers. The ponds help manage the excess urban runoff generated by newly constructed impervious surfaces such as roads, parking lots and rooftops. A basin functions by allowing large flows of water to enter but limits the outflow by having a small opening at the lowest point of the structure. The size of this opening is determined by the capacity of underground and downstream culverts and washes to handle the release of the contained water. Frequently the inflow area is constructed to protect the structure from some types of damage. Offset concrete blocks in the entrance spillways are used to reduce the speed of entering flood water. These structures may also have debris drop vaults to collect large rocks. These vaults are deep holes under the entrance to the structure. The holes are wide enough to allow large rocks and other debris to fall into the holes before they can damage the rest of the structure. These vaults must be emptied after each storm event. Research has shown that detention basins built with real-time control of the outflow from the basin are significantly more effective at retaining total suspended solids and associated contaminants, such as heavy metals, when compared to basins without control. Extended detention basin A variant basin design called an extended detention dry basin can limit downstream erosion and control of some pollutants such as suspended solids. This basin type differs from a retention basin, also known as a "wet pond," which includes a permanent pool of water. While basic detention ponds are typically designed to empty within 6 to 12 hours after a storm, extended detention (ED) dry basins improve the basic detention design by lengthening the storage time, for example, to 24 or 48 hours. Longer detention allows for more settling of suspended solids, resulting in higher-quality water. See also Best management practice for water pollution Groundwater banking Retention basin Stream restoration Sustainable urban drainage systems Sustainable Flood Retention Basin Balancing lake References External links Detention vs. retention - Project Brays (Harris County, Texas) Maintaining Your BMPs: A Guidebook for Private Owners & Operators in Northern Virginia Environmental engineering Hydraulic engineering Hydrology Infrastructure Ponds Water treatment Stormwater management Water supply
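The article does not specify how the low-level outlet opening is sized; a common first-pass approach in stormwater design is to treat it as a free-discharging orifice, Q = Cd·A·√(2gh). The sketch below is a hedged illustration of that idea only: the discharge coefficient, opening diameter, and head are placeholder assumptions, not design values.

```python
# First-pass outlet estimate for a dry detention basin, treating the low-level
# opening as a free-discharging orifice: Q = Cd * A * sqrt(2 * g * h).
# Cd, diameter, and head are illustrative assumptions, not design values.
import math

G = 9.81  # m/s^2

def orifice_outflow(head_m: float, diameter_m: float, cd: float = 0.6) -> float:
    """Outflow (m^3/s) through a circular orifice under a given head of water."""
    area = math.pi * (diameter_m / 2) ** 2
    return cd * area * math.sqrt(2 * G * head_m)

# Example: 0.3 m diameter opening with 2 m of stored water above it.
q = orifice_outflow(head_m=2.0, diameter_m=0.3)
print(f"outflow ~ {q:.2f} m^3/s")   # ~0.27 m^3/s

# Because Q grows only with sqrt(h), the basin releases stored water slowly
# even when nearly full, which is the throttling behaviour described above.
```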
Detention basin
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
598
[ "Hydrology", "Water treatment", "Stormwater management", "Chemical engineering", "Water supply", "Water pollution", "Physical systems", "Construction", "Hydraulics", "Civil engineering", "Environmental engineering", "Water technology", "Hydraulic engineering", "Infrastructure" ]
13,229,499
https://en.wikipedia.org/wiki/Bochner%20identity
In mathematics — specifically, differential geometry — the Bochner identity is an identity concerning harmonic maps between Riemannian manifolds. The identity is named after the American mathematician Salomon Bochner. Statement of the result Let M and N be Riemannian manifolds and let u : M → N be a harmonic map. Let du denote the derivative (pushforward) of u, ∇ the gradient, Δ the Laplace–Beltrami operator, RiemN the Riemann curvature tensor on N and RicM the Ricci curvature tensor on M. Then Δ(½ |du|²) = |∇(du)|² + ⟨du · RicM, du⟩ − ⟨RiemN(u)(du, du) du, du⟩. See also Bochner's formula References External links Differential geometry Mathematical identities
Bochner identity
[ "Mathematics" ]
130
[ "Mathematical theorems", "Mathematical identities", "Mathematical problems", "Algebra" ]
13,230,920
https://en.wikipedia.org/wiki/Furstenberg%27s%20proof%20of%20the%20infinitude%20of%20primes
In mathematics, particularly in number theory, Hillel Furstenberg's proof of the infinitude of primes is a topological proof that the integers contain infinitely many prime numbers. When examined closely, the proof is less a statement about topology than a statement about certain properties of arithmetic sequences. Unlike Euclid's classical proof, Furstenberg's proof is a proof by contradiction. The proof was published in 1955 in the American Mathematical Monthly while he was still an undergraduate student at Yeshiva University. Furstenberg's proof Define a topology on the integers ℤ, called the evenly spaced integer topology, by declaring a subset U ⊆ ℤ to be an open set if and only if it is a union of arithmetic sequences S(a, b) for a ≠ 0, or is empty (which can be seen as a nullary union (empty union) of arithmetic sequences), where S(a, b) = {a n + b : n ∈ ℤ} = a ℤ + b. Equivalently, U is open if and only if for every x in U there is some non-zero integer a such that S(a, x) ⊆ U. The axioms for a topology are easily verified: ∅ is open by definition, and ℤ is just the sequence S(1, 0), and so is open as well. Any union of open sets is open: for any collection of open sets Ui and x in their union U, any of the numbers ai for which S(ai, x) ⊆ Ui also shows that S(ai, x) ⊆ U. The intersection of two (and hence finitely many) open sets is open: let U1 and U2 be open sets and let x ∈ U1 ∩ U2 (with numbers a1 and a2 establishing membership). Set a to be the least common multiple of a1 and a2. Then S(a, x) ⊆ S(ai, x) ⊆ Ui. This topology has two notable properties: Since any non-empty open set contains an infinite sequence, a finite non-empty set cannot be open; put another way, the complement of a finite non-empty set cannot be a closed set. The basis sets S(a, b) are both open and closed: they are open by definition, and we can write S(a, b) as the complement of an open set as follows: S(a, b) = ℤ \ ⋃_{j = 1}^{a − 1} S(a, b + j). The only integers that are not integer multiples of prime numbers are −1 and +1, i.e. ℤ \ {−1, +1} = ⋃_{p prime} S(p, 0). Now, by the first topological property, the set on the left-hand side cannot be closed. On the other hand, by the second topological property, the sets S(p, 0) are closed. So, if there were only finitely many prime numbers, then the set on the right-hand side would be a finite union of closed sets, and hence closed. This would be a contradiction, so there must be infinitely many prime numbers. Topological properties The evenly spaced integer topology on ℤ is the topology induced by the inclusion ℤ ⊆ Ẑ, where Ẑ is the profinite integer ring with its profinite topology. It is homeomorphic to the rational numbers with the subspace topology inherited from the real line, which makes it clear that any finite subset of it, such as {−1, +1}, cannot be open. Notes References Keith Conrad https://kconrad.math.uconn.edu/blurbs/ugradnumthy/primestopology.pdf External links Furstenberg's proof that there are infinitely many prime numbers at Everything2 Article proofs General topology Prime numbers
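As an illustration only (my own, not part of Furstenberg's argument), the two set identities used above can be sanity-checked on a finite window of integers:

```python
# Finite-window sanity check of the set identities used in the proof.
# This checks the algebra on a bounded range of integers; it is not a proof.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def S(a, b, window):
    """S(a, b) = a*Z + b, restricted to the window."""
    return {n for n in window if (n - b) % a == 0}

window = range(-1000, 1001)
W = set(window)

# 1) S(a, b) is the complement of the union of the other residue classes mod a,
#    so it is closed as well as open.
a, b = 6, 1
other_classes = set().union(*(S(a, b + j, window) for j in range(1, a)))
assert S(a, b, window) == W - other_classes

# 2) The union of S(p, 0) over primes p covers every integer except -1 and +1.
#    (Within |n| <= 1000, every n with |n| > 1 has a prime factor <= 1000.)
prime_multiples = set().union(*(S(p, 0, window) for p in primes_up_to(1000)))
assert prime_multiples == W - {-1, 1}

print("both identities hold on the finite window")
```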
Furstenberg's proof of the infinitude of primes
[ "Mathematics" ]
701
[ "General topology", "Prime numbers", "Mathematical objects", "Article proofs", "Topology", "Numbers", "Number theory" ]
13,232,165
https://en.wikipedia.org/wiki/High-redundancy%20actuation
High-redundancy actuation (HRA) is a new approach to fault-tolerant control in the area of mechanical actuation. Overview The basic idea is to use a lot of small actuation elements, so that a fault in one element has only a minor effect on the overall system. This way, a high-redundancy actuator can remain functional even after several elements are at fault. This property is also called graceful degradation. Fault-tolerant operation in the presence of actuator faults requires some form of redundancy. Actuators are essential, because they are used to keep the system stable and to bring it into the desired state. Both require a certain amount of power or force to be applied to the system. No control approach can work unless the actuators produce this necessary force. So the common solution is to err on the side of safety by over-actuation: much more control action than strictly necessary is built into the system. For critical systems, the normal approach involves straightforward replication of the actuators. Often three or four actuators are used in parallel for aircraft flight control systems, even if one would be sufficient from a control point of view. So if one actuator fails, the remaining actuators can always keep the system operational. While this approach is certainly successful, it also makes the system expensive, heavy and inefficient. Inspiration of high-redundancy actuation The idea of high-redundancy actuation (HRA) is inspired by the human musculature. A muscle is composed of many individual muscle cells, each of which provides only a minute contribution to the force and the travel of the muscle. These properties allow the muscle as a whole to be highly resilient to damage to individual cells. Technical realisation The aim of high redundancy actuation is not to produce man-made muscles, but to use the same principle of cooperation in technical actuators to provide intrinsic fault tolerance. To achieve this, a high number of small actuator elements are assembled in parallel and in series to form one actuator (see Series and parallel circuits). Faults within the actuator will affect the maximum capability, but through robust control, full performance can be maintained without either adaptation or reconfiguration. Some form of condition monitoring is necessary to provide warnings to the operator calling for maintenance. But this monitoring has no influence on the system itself, unlike in adaptive methods or control reconfiguration, which simplifies the design of the system significantly. The HRA is an important new approach within the overall area of fault-tolerant control, using concepts of reliability engineering on a mechanical level. When applicable, it can provide actuators that have graceful degradation, and that continue to operate at close to nominal performance even in the presence of multiple faults in the actuator elements. Using actuation elements in series An important feature of high-redundancy actuation is that the actuator elements are connected both in parallel and in series. While the parallel arrangement is commonly used, the configuration in series is rarely employed, because it is perceived to be less efficient. However, there is one fault that is difficult to deal with in a parallel arrangement: the locking up of one actuator element. Because parallel actuator elements always have the same extension, one locked-up element can render the whole assembly useless.
It is possible to mitigate this by guarding the elements against locking or by limiting the force exerted by a single element. But these measures reduce both the effectiveness of the system and introduce new points of failure. The analysis of the serial configuration shows that it remains operational when one element is locked-up. This fact is important for the High Redundancy Actuator, as fault tolerance is required for different fault types. The goal of the HRA project is to use parallel and serial actuator elements to accommodate both the blocking and the inactivity (loss of force) of an element. Available technology The basic idea of high-redundancy actuation is technology agnostic: it should be applicable to a wide range of actuator technology, including different kinds of linear actuators and rotational actuators. However, initial experiments are performed with electric actuators, especially with electromechanical and electromagnetic technology. Compared to pneumatic actuators, the electrical drive allow a much finer control of position and force. Further reading M. Blanke, M. Kinnaert, J. Lunze, M. Staroswiecki, J. Schröder: "Diagnosis and Fault-Tolerant Control", . Springer, New York, 2006. S. Chen, G. Tao, and S. M. Joshi: "On matching conditions for adaptive state tracking control of systems with actuator failures", in IEEE Transactions on Automatic Control, vol. 47, no. 3, pp. 473–478, 2002. X. Du, R. Dixon, R.M. Goodall, and A.C. Zolotas: "LQG Control for a Highly Redundant Actuator", in Preprint of the IFAC Conference for Advanced Intelligent Mechatronics (AIM), Zurich, 2007. X. Du, R. Dixon, R.M. Goodall, and A.C. Zolotas: "Assessment Of Strategies For Control Of High Redundancy Actuators", ACTUATOR 2006, Germany. X. Du, R. Dixon, R.M. Goodall, and A.C. Zolotas: "Modelling And Control Of A Highly Redundant Actuator", CONTROL 2006, Scotland, 2006. T. Steffen, J. Davies, R. Dixon, R.M. Goodall and A.C. Zolotas: "Using a Series of Moving Coils as a High Redundancy Actuator", in Preprint of the IFAC Conference for Advanced Intelligent Mechatronics (AIM), Zurich, 2007. Arun Manohar Gollapudi, V. Velagapudi, S. Korla: "Modeling and simulation of a high-redundancy direct-driven linear electromechanical actuator for fault-tolerance under various fault conditions", Engineering Science and Technology, an International Journal, Volume 23, Issue 5, October 2020, Pages 1171-1181. External links home page of the initial project Control engineering
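A toy numerical model can make the graceful-degradation argument concrete. Everything in the sketch below (unit force and travel per element, the way the two fault types are handled, the 4 × 3 arrangement) is a simplifying assumption of mine, not a model from the project literature.

```python
# Toy model of a high-redundancy actuator: a series chain of stages, each stage
# being a bank of parallel elements. Assumptions: a healthy element contributes
# one unit of force and one unit of travel; a "loss of force" fault removes its
# force contribution; a "lock-up" fault freezes its whole parallel bank (all
# elements in a bank share the same extension), removing that stage's travel.
from dataclasses import dataclass

@dataclass
class Element:
    lost_force: bool = False
    locked: bool = False

def capability(stages):
    """Return (force, travel) of a series chain of parallel banks."""
    travel = 0.0
    force = float("inf")
    for bank in stages:
        travel += 0.0 if any(e.locked for e in bank) else 1.0
        bank_force = sum(0.0 if e.lost_force else 1.0 for e in bank)
        force = min(force, bank_force)      # the weakest stage limits the chain
    return force, travel

# 4 series stages x 3 parallel elements each.
stages = [[Element() for _ in range(3)] for _ in range(4)]
print(capability(stages))         # healthy: (3.0, 4.0)

stages[1][0].locked = True        # one element locks up
stages[2][2].lost_force = True    # another loses its force
print(capability(stages))         # degraded but still functional: (2.0, 3.0)

# For comparison, a purely parallel actuator (a single bank) would lose all of
# its travel as soon as any one element locked up.
```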
High-redundancy actuation
[ "Engineering" ]
1,330
[ "Control engineering" ]
13,235,454
https://en.wikipedia.org/wiki/International%20Mass%20Spectrometry%20Foundation
The International Mass Spectrometry Foundation (IMSF) is a non-profit scientific organization in the field of mass spectrometry. It operates the International Mass Spectrometry Society, which consists of 37 member societies and sponsors the International Mass Spectrometry Conference that is held once every two years. Aims The foundation has four aims: organizing international conferences and workshops in mass spectrometry improving mass spectrometry education standardizing terminology in the field aiding in the dissemination of mass spectrometry through publications Conferences Before the formation of the IMSF, the first International Mass Spectrometry Conference was held in London in 1958, and 41 papers were presented. Since then, conferences were held every three years until 2012 and have been held every two years since. Conference proceedings are published in a book series, Advances in Mass Spectrometry, which is the oldest continuous series of publications in mass spectrometry. The International Mass Spectrometry Society evolved from this series of International Mass Spectrometry Conferences. The IMSF was officially registered in the Netherlands in 1998 following an agreement at the 1994 conference. Past meetings have been held at locations around the world. Awards The society sponsors several awards, including the Curt Brunnée Award for achievements in instrumentation by a scientist under 45 years of age and the Thomson Medal Award for achievements in mass spectrometry, as well as travel awards and student paper awards. See also American Society for Mass Spectrometry British Mass Spectrometry Society Canadian Society for Mass Spectrometry List of female mass spectrometrists References External links Chemistry societies Mass spectrometry Organisations based in Gelderland Scientific organisations based in the Netherlands
International Mass Spectrometry Foundation
[ "Physics", "Chemistry" ]
349
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "nan", "Chemistry societies", "Matter" ]
15,910,901
https://en.wikipedia.org/wiki/Cryptotope
A cryptotope is an antigenic site or epitope hidden in a protein or virion by surface subunits. Cryptotopes are antigenically active only after the dissociation of protein aggregates and virions. Some infectious pathogens are known to escape immunological targeting by B-cells by masking antigen-binding sites as cryptotopes. A cryptotope can also be referred to as a cryptic epitope. Cryptotopes are becoming important for HIV vaccine research as a number of studies have shown that cryptic epitopes can be revealed or exposed when HIV gp120 binds to CD4. References Immune system
Cryptotope
[ "Biology" ]
133
[ "Immune system", "Organ systems" ]
7,111,308
https://en.wikipedia.org/wiki/Tubular%20NDT
Tubular NDT (nondestructive testing) is the application of various technologies to detect anomalies such as corrosion and manufacturing defects in metallic tubes. Tubing can be found in such equipment as boilers and heat exchangers. To carry out an examination in situ (i.e., examination of the tubes in position, where they are installed), a manhole cover is usually removed to allow a technician access to the tubes. Alternatively, a tube bundle may be removed from a heat-exchanger and transported by forklift to a maintenance area for easier access. The usual means of examination is to insert some type of probe into the tubes, one at a time, while data is recorded for later interpretation. The technologies listed below (ECT, RFT, IRIS, and MFL) are all able to detect defects on the outside of the tube from the inside. The tubes must be clean enough to allow passage of the probe: deposits of debris, rust, or scale may have to be removed by chemicals or pressure washing. In water-tube boilers, the tubes may be examined from the outside when the boiler is shut down, often using ultrasonic testing. Common methods Eddy-current testing (ECT) is commonly used on non-ferromagnetic metals and alloys such as copper, brass, and copper nickel. Variations on ECT are partial saturation ECT and magnetic biased ECT, both of which use magnets to allow ECT to operate in lightly ferromagnetic materials or in thin-wall ferromagnetic tubes. Remote field testing (RFT) is used on ferromagnetic materials such as carbon steel. IRIS (Internal rotary inspection system) can be used on all types of metal tubes. IRIS is very slow, but very accurate, and is often used as a back-up to a remote field examination. Magnetic flux leakage (MFL) testing is used on carbon steel tubes, although it tends to be less accurate than remote field testing. References Sources Heat exchangers: Monitoring and maintenance H. Sadek, NDE technologies for the examination of heat exchangers and boiler tubes – principles, advantages and limitations, PDF, 2.1 MB. Fathi E. Al-Qadeeb, Tubing Inspection Using Multiple NDT Techniques, PDF, 118 kB. Nondestructive testing
Tubular NDT
[ "Materials_science" ]
484
[ "Nondestructive testing", "Materials testing" ]
7,111,771
https://en.wikipedia.org/wiki/UVB-induced%20apoptosis
UVB-induced apoptosis is the programmed cell death of cells that become damaged by ultraviolet rays. This is notable in skin cells, to prevent melanoma. Some studies have shown that exercise accelerates this process. Description Apoptosis is a physiological process that promotes the active suicide of cells, resulting in an advantage to the organism, unlike necrosis, which occurs from trauma. In the average human adult it is estimated that 50 to 70 billion cells die each day from apoptosis. One of the largest promoters of apoptosis is exposure to ultraviolet (UV) light. While UV light is essential to human life, it can also cause harm by inducing cancer, immunosuppression, photoaging, inflammation, and cell death. Of the various components of sunlight, ultraviolet radiation B (UVB) (290-320 nm) is considered to be the most harmful. This type of radiation acts primarily on the epidermis, and in particular the keratinocytes. Keratinocytes are known to form a barrier that provides a layer of protection within the skin against environmental hazards. Within the epidermis, in addition to the keratinocytes, there are melanocytes (melanin-producing cells). These cells produce pigment that provides the keratinocytes with protection against UVB radiation. Once the keratinocytes have been damaged irreparably as a result of UVB radiation, they are marked for destruction by apoptosis to eliminate them as potentially mutagenic cells. Failure of the body to remove DNA-damaged cells increases the risk of skin cancer. One consequence of acute UVB exposure is the occurrence of sunburn cells, i.e., apoptotic keratinocytes, within the epidermis. It has been found that when exposed to UVB radiation, the DNA in an epidermal cell undergoes fragmentation, which could result in the growth of tumor cells. To prevent this, the cell undergoes a morphological change. These altered keratinocytes exhibit the capacity to release TNF-α (tumor necrosis factor alpha), which stops the growth of the tumor by promoting the death of the cell. If keratinocyte cells have been damaged by UVB radiation, the term "sunburn cell" or "SBC formation" is used. It is thought that when keratinocytes have been damaged by UVB radiation, this triggers a series of processes, caused in part by damage to the DNA. A study indicates that it may be at the mitochondria that the various pathways (ligand-dependent receptor activation and cytosolic signaling) are activated by the production of reactive oxygen species (ROS), which may direct the destruction of keratinocytes through apoptosis by activating caspases. Increased exposure to an oxygen-reduced environment promotes the development of ROS, thereby linking the incidence of ROS with keratinocytes and making these cells more sensitive to UVB radiation. A study by Tobi et al. in 2002 linked ROS with cytotoxicity, apoptosis, mutations, and carcinogenesis. Mild hypoxia (1-5%) sensitized keratinocytes to UVB-induced apoptosis, while protecting melanocytes from environmental stresses. A study by Mark Schotanus et al. has demonstrated that in addition to potential damage to keratinocytes and melanocytes, exposure to UVB radiation may also produce a loss of potassium ions, which may then cause the activation of apoptotic pathways in lymphocytes and neuronal cells as opposed to keratinocytes and melanocytes. It has been demonstrated that incubation of lymphocytes and neuronal cells in elevated concentrations of potassium ions provides protection from apoptosis.
This phenomenon was demonstrated in tears, which have higher levels of potassium ions and bathe the cells of the eye, and which therefore provide protection from UVB radiation. Reduction of potassium ions promotes apoptosis and the synthesis of the initiator caspase-8 and the effector caspase-3. A study by Terrence J. Piva, Catherine M. Davern, Paula M. Hall, Clay M. Winterford and Kay A.O. Ellem, published on February 28, 2012 in the International Journal of Molecular Sciences (2012; 13(3), pages 2560-2675), reported that while caspases may play a role in apoptosis, it is specifically not as a result of caspase-3. It was reported in that study that the process of apoptosis includes: "detachment from the substrate, followed by loss of specialized membrane structures such as microvilli. The cell then undergoes rounding, shrinkage and blebbing before condensation of chromatin is observed in the nucleus. After a period of time the cell fragments into apoptotic bodies, which in vivo are engulfed and degraded by phagocytic cells such as macrophages". Caspase-1 is involved in the aforementioned cell membrane activity, but not caspase-3. UVB-induced apoptosis pathway The sequence of events that leads to apoptosis is multifaceted and complex. Despite the simple concept of apoptosis, the sequence of events that leads to it, and the conditions that attempt to counteract it, can be very involved. Since apoptosis is a last-resort alternative, multiple other genes (ING2, p53, or the Ras subfamily) must be expressed before the cell is finally programmed for death. In addition, genes like Survivin can attempt to suppress apoptosis. References Free Radical Biology and Medicine, Vol 52, Issue 6, 15 March 2012, Pages 1111-1120. Skin mild hypoxia enhances killing of UVB-damaged keratinocytes through reactive oxygen species-mediated apoptosis requiring Noxa and Bim. Kris Nys, Hannelore Maes, Graciela Andrei, Robert Snoeck, Maria Garmyn, Patrizia Agostinis Experimental Eye Research, Vol 93, Issue 5, November 2011, pages 735-740. Stratified corneal limbal epithelial cells are protected from UVB-induced apoptosis by elevated extracellular potassium ions. Mark Schotanus, Leah R. Koetje, Rachel E. Van Dyken, John L. Ubels Methods 2008; 44; pages 205-221. Apoptosis and necrosis: detection, discrimination and phagocytosis. Krysko D.V., Vanden Berghe T., D'Herde K., Vandenabeele P. External links LiveScience article on the subject Cell signaling Immune system Programmed cell death
UVB-induced apoptosis
[ "Chemistry", "Biology" ]
1,394
[ "Immune system", "Signal transduction", "Senescence", "Organ systems", "Programmed cell death" ]
7,112,300
https://en.wikipedia.org/wiki/Barberpole%20illusion
The barberpole illusion is a visual illusion that reveals biases in the processing of visual motion in the human brain. The illusion occurs when a diagonally striped pole is rotated around its vertical axis (horizontally): the stripes appear to move in the direction of the pole's vertical axis (downwards in the case of the animation to the right) rather than around it. History In 1929, psychologist J.P. Guilford informally noted a paradox in the perceived motion of stripes on a rotating barber pole. The barber pole turns in place on its vertical axis, but the stripes appear to move upwards rather than turning with the pole. Guilford tentatively attributed the phenomenon to eye movements, but acknowledged the absence of data on the question. In 1935, Hans Wallach published a comprehensive series of experiments related to this topic, but since the article was in German it was not immediately known to English-speaking researchers. An English summary of the research was published in 1976 and a complete English translation of the 1935 paper was published by Sophie Wuerger, Robert Shapley, and Nava Rubin in 1996. Wallach's analysis focused on the interaction between the terminal points of the diagonal lines and the implicit aperture created by the edges of the pole. Explanation This illusion occurs because a bar or contour within a frame of reference provides ambiguous information about its "real" direction of movement. The actual motion of the line has many possibilities. The shape of the aperture thus tends to determine the perceived direction of motion for an otherwise identically moving contour. A vertically elongated aperture makes vertical motion dominant, whereas a horizontally elongated aperture makes horizontal motion dominant. In the case of a circular or square aperture, the perceived direction of movement is usually orthogonal to the orientation of the stripes (diagonal, in this case). The perceived direction of movement relates to the termination of the line's end points within the inside border of the occluder. The vertical aperture, for instance, has longer edges at the vertical orientation, creating a larger number of terminators unambiguously moving vertically. This stronger motion signal forces us to perceive vertical motion. Functionally, this mechanism has evolved to ensure that we perceive a moving pattern as a rigid surface moving in one direction. Individual motion-sensitive neurons in the visual system have only limited information, as they see only a small portion of the visual field (a situation referred to as the "aperture problem"). In the absence of additional information the visual system prefers the slowest possible motion: i.e., motion orthogonal to the moving line. The neurons which may correspond to perceiving barber-pole-like patterns have been identified in the visual cortex of ferrets. Auditory analogue A similar effect occurs in the Shepard tone, which is an auditory illusion. See also Screw (simple machine) – screws convert rotational motion to linear motion and exhibit the same mechanism Motion perception Auditory illusion References Notes External links Barberpole effect animation and explanation. Optical illusions
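The preference for the slowest motion consistent with a moving contour can be made concrete with a small vector calculation: the perceived velocity is the component of the true velocity perpendicular to the line's orientation. The sketch below is illustrative only (the function name and the 45-degree example are assumptions, not taken from the article), and it captures just this aperture-problem component, not the terminator-driven capture by an elongated aperture.

```python
# Illustrative sketch (not from the article): the "slowest possible motion"
# consistent with a moving line is the component of the true velocity that is
# perpendicular to the line's orientation.
import numpy as np

def perceived_velocity(true_velocity, line_angle_deg):
    """Project the true velocity onto the unit normal of the line.

    true_velocity : (vx, vy) of the physically moving pattern
    line_angle_deg: orientation of the stripes, measured from the x-axis
    """
    theta = np.radians(line_angle_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the line
    speed_along_normal = np.dot(true_velocity, normal)
    return speed_along_normal * normal

# Stripes at 45 degrees, pattern actually translating to the right:
v_true = np.array([1.0, 0.0])
print(perceived_velocity(v_true, 45.0))  # -> [ 0.5 -0.5], an oblique, slower motion
```

In the full barber-pole display, the long vertical edges of the aperture then bias this ambiguous signal further toward vertical motion, as described above.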
Barberpole illusion
[ "Physics" ]
604
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
7,115,718
https://en.wikipedia.org/wiki/Ball-and-stick%20model
In chemistry, the ball-and-stick model is a molecular model of a chemical substance which displays both the three-dimensional position of the atoms and the bonds between them. The atoms are typically represented by spheres, connected by rods which represent the bonds. Double and triple bonds are usually represented by two or three curved rods, respectively, or alternatively by correctly positioned sticks for the sigma and pi bonds. In a good model, the angles between the rods should be the same as the angles between the bonds, and the distances between the centers of the spheres should be proportional to the distances between the corresponding atomic nuclei. The chemical element of each atom is often indicated by the sphere's color. In a ball-and-stick model, the radius of the spheres is usually much smaller than the rod lengths, in order to provide a clearer view of the atoms and bonds throughout the model. As a consequence, the model does not provide a clear insight into the space occupied by the molecule. In this aspect, ball-and-stick models are distinct from space-filling (calotte) models, where the sphere radii are proportional to the Van der Waals atomic radii in the same scale as the atom distances, and therefore show the occupied space but not the bonds. Ball-and-stick models can be physical artifacts or virtual computer models. The former are usually built from molecular modeling kits, consisting of a number of coil springs or plastic or wood sticks, and a number of plastic balls with pre-drilled holes. The sphere colors commonly follow the CPK coloring. Some university courses on chemistry require students to buy such models as learning material. History In 1865, German chemist August Wilhelm von Hofmann was the first to make ball-and-stick molecular models. He used such models in lectures at the Royal Institution of Great Britain. Specialist companies manufacture kits and models to order. One of the earlier companies was Woosters at Bottisham, Cambridgeshire, UK. Besides tetrahedral, trigonal and octahedral holes, there were all-purpose balls with 24 holes. These models allowed rotation about the single rod bonds, which could be both an advantage (showing molecular flexibility) and a disadvantage (models are floppy). The approximate scale was 5 cm per ångström (0.5 m/nm or 500,000,000:1), but was not consistent over all elements. The Beevers Miniature Models company in Edinburgh (now operating as Miramodus) produced small models beginning in 1961 using PMMA balls and stainless steel rods. In these models, the use of individually drilled balls with precise bond angles and bond lengths enabled large crystal structures to be accurately created in a light and rigid form. See also VSEPR theory References Molecular modelling
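For the virtual case, a ball-and-stick model reduces to a list of atom positions, element labels and a bond list, and a kit scale such as the 5 cm per ångström quoted above converts bond lengths into rod lengths. The sketch below is a minimal illustration under assumed data (a water molecule with approximate geometry); it is not the format of any particular modelling package.

```python
# Minimal illustrative sketch (assumed data layout, not a standard library API):
# a virtual ball-and-stick model is atom positions, element labels and a bond
# list; a physical kit scale of 5 cm per angstrom converts distances to rods.
import math

SCALE_CM_PER_ANGSTROM = 5.0  # the approximate kit scale quoted above

# Water, with coordinates in angstroms (approximate geometry, for illustration)
atoms = [("O", (0.000, 0.000, 0.000)),
         ("H", (0.958, 0.000, 0.000)),
         ("H", (-0.239, 0.928, 0.000))]
bonds = [(0, 1), (0, 2)]  # indices into the atom list

def rod_length_cm(i, j):
    (_, a), (_, b) = atoms[i], atoms[j]
    d = math.dist(a, b)  # bond length in angstroms
    return d * SCALE_CM_PER_ANGSTROM

for i, j in bonds:
    print(f"{atoms[i][0]}-{atoms[j][0]} rod: {rod_length_cm(i, j):.1f} cm")
# Each O-H bond (~0.96 angstrom) becomes a rod of roughly 4.8 cm at this scale.
```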
Ball-and-stick model
[ "Chemistry" ]
560
[ "Theoretical chemistry", "Molecular modelling", "Molecular physics" ]
7,116,205
https://en.wikipedia.org/wiki/APM%2008279%2B5255
APM 08279+5255 is a very distant, broad absorption line quasar located in the constellation Lynx. It is magnified and split into multiple images by the gravitational lensing effect of a foreground galaxy through which its light passes. It appears to be a giant elliptical galaxy with a supermassive black hole and associated accretion disk. It possesses large regions of hot dust and molecular gas, as well as regions with starburst activity. Gravitational lensing APM 08279+5255 was initially identified as a quasar in 1998 during an Automatic Plate Measuring Facility (APM) survey to find carbon stars in the galactic halo. The combination of its high redshift (z=3.87) and brightness (particularly in the infrared) made it the most luminous object yet seen in the universe. It was suspected of being a gravitationally lensed object, with its luminosity magnified. Observations in the infrared with the NICMOS high-resolution camera on board the Hubble Space Telescope (HST) showed that the source was composed of three discrete images. Even accounting for the magnification, the quasar is an extremely powerful object, with a luminosity of 10^14 to 10^15 times the luminosity of the sun. Subsequent observations with the Hubble Space Telescope Imaging Spectrograph confirmed the presence of a third faint image between the two brighter images. Each component has the same spectral energy distribution and is an image of the quasar. Gravitationally lensed systems with odd numbers of images are extremely rare; most contain two or four. Initially the magnification due to gravitational lensing was thought to be large, in the range of 40 to 90 times. After detailed observations at many wavelengths, the best model of the lensing galaxy is a tilted spiral galaxy. This gives a magnification of about 4. The additional observations led to a revised redshift of 3.911. Galactic structure APM 08279+5255 is a bright source at almost all wavelengths and has become one of the most studied of distant sources. It has been mapped in X-rays with the AXAF CCD Imaging Spectrometer on the Chandra X-ray Observatory, in the infrared with the Hubble Space Telescope, and, using interferometry, in radio with the Very Long Baseline Array. Measurements with the IRAM Plateau de Bure Interferometer and other instruments looked at the distribution of molecules such as CO, CN, HCN, and HCO+ as well as atomic carbon. From these observations APM 08279+5255 is in a giant elliptical galaxy with large amounts of gas, dust, and an active galactic nucleus (AGN) at its core. The AGN is radio-quiet with no evidence for a relativistic jet. It is powered by one of the largest known supermassive black holes: 23 billion solar masses (based on the molecular disk velocities); or alternatively 10 billion solar masses (based on reverberation mapping). The black hole is surrounded by an accretion disk of material spiraling into it, a few parsecs in size. Further out is a dust torus, a doughnut-shaped cloud of dust and gas with a radius of about 100 parsecs. Both the accretion disk and dust torus appear to be almost face-on to us. The radiation from the molecular gas is coming from a flattened disk at the center of the galaxy with a radius of 550 pc. This is also the starburst region of the galaxy. The gas is heated both by activity in the AGN and by the newly forming stars. APM 08279+5255 is an ultra-luminous infrared galaxy (ULIRG). Its high redshift shifts the far-infrared spectrum into millimeter wavelengths where it can be observed from observatories on the ground. 
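For a fixed observed flux, the intrinsic luminosity inferred for a lensed source scales inversely with the assumed magnification, which is why the revision of the magnification from roughly 40-90 down to about 4 matters. The sketch below is a back-of-the-envelope illustration with placeholder numbers, not the published values.

```python
# Back-of-the-envelope sketch: for a fixed observed flux, the inferred intrinsic
# luminosity scales as 1/magnification.  L_apparent is a placeholder for the
# luminosity one would infer with no lensing correction at all (illustrative value).
L_apparent = 1.0e15   # solar luminosities, hypothetical uncorrected estimate

for magnification in (90, 40, 4):
    print(f"mu = {magnification:>2}: intrinsic ~ {L_apparent / magnification:.1e} L_sun")

# Revising the magnification downward from ~40-90 to ~4 raises the inferred
# intrinsic luminosity by a factor of roughly 10-22, which is why the quasar
# remains extraordinarily luminous even after the lensing correction.
```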
In 2008 and 2009 the intensities of its water vapor spectral lines were measured using the millimeter wave spectrometer Z-Spec at the Caltech Submillimeter Observatory. Comparing the spectrum to that of Markarian 231, another ULIRG, showed that it had 50 times the water vapor of that galaxy. This made it the largest mass of water in the known universe—100 trillion times more water than that held in all of Earth's oceans combined. Its discovery shows that water has been prevalent in the known universe for nearly its entire existence; the radiation was emitted 1.6 billion years after the Big Bang. Gallery References External links Lynx (constellation) Gravitationally lensed quasars IRAS catalogue objects Starburst galaxies Supermassive black holes Astronomical objects discovered in 1998
APM 08279+5255
[ "Physics", "Astronomy" ]
952
[ "Black holes", "Lynx (constellation)", "Unsolved problems in physics", "Supermassive black holes", "Constellations" ]
14,383,139
https://en.wikipedia.org/wiki/Allotropes%20of%20sulfur
The element sulfur exists as many allotropes. In number of allotropes, sulfur is second only to carbon. In addition to the allotropes, each allotrope often exists in polymorphs (different crystal structures of the same covalently bonded Sn molecules) delineated by Greek prefixes (α, β, etc.). Furthermore, because elemental sulfur has been an item of commerce for centuries, its various forms are given traditional names. Early workers identified some forms that have later proved to be single or mixtures of allotropes. Some forms have been named for their appearance, e.g. "mother of pearl sulfur", or alternatively named for a chemist who was pre-eminent in identifying them, e.g. "Muthmann's sulfur I" or "Engel's sulfur". The most commonly encountered form of sulfur is the orthorhombic polymorph of , which adopts a puckered ring – or "crown" – structure. Two other polymorphs are known, also with nearly identical molecular structures. In addition to , sulfur rings of 6, 7, 9–15, 18, and 20 atoms are known. At least five allotropes are uniquely formed at high pressures, two of which are metallic. The number of sulfur allotropes reflects the relatively strong S−S bond of 265 kJ/mol. Furthermore, unlike most elements, the allotropes of sulfur can be manipulated in solutions of organic solvents and are analysed by HPLC. Phase diagram The pressure-temperature (P-T) phase diagram for sulfur is complex (see image). The region labeled I (a solid region), is α-sulfur. High-pressure solid allotropes In a high-pressure study at ambient temperatures, four new solid forms, termed II, III, IV, V have been characterized, where α-sulfur is form I. Solid forms II and III are polymeric, while IV and V are metallic (and are superconductive below 10 K and 17 K, respectively). Laser irradiation of solid samples produces three sulfur forms below 200–300 kbar (20–30 GPa). Solid cyclo allotrope preparation Two methods exist for the preparation of the cyclo-sulfur allotropes. One of the methods, which is most famous for preparing hexasulfur, is to treat hydrogen polysulfides with polysulfur dichloride: A second strategy uses titanocene pentasulfide as a source of the unit. This complex is easily made from polysulfide solutions: Titanocene pentasulfide reacts with polysulfur chloride: Solid cyclo-sulfur allotropes Cyclo-hexasulfur, cyclo- This allotrope was first prepared by M. R. Engel in 1891 by treating thiosulfate with HCl. Cyclo- is orange-red and forms a rhombohedral crystal. It is called ρ-sulfur, ε-sulfur, Engel's sulfur and Aten's sulfur. Another method of preparation involves the reaction of a polysulfane with sulfur monochloride: (dilute solution in diethyl ether) The sulfur ring in cyclo- has a "chair" conformation, reminiscent of the chair form of cyclohexane. All of the sulfur atoms are equivalent. Cyclo-heptasulfur, cyclo- It is a bright yellow solid. Four (α-, β-, γ-, δ-) forms of cyclo-heptasulfur are known. Two forms (γ-, δ-) have been characterized. The cyclo- ring has an unusual range of bond lengths of 199.3–218.1 pm. It is said to be the least stable of all of the sulfur allotropes. Cyclo-octasulfur, cyclo- Octasulfur contains puckered rings, and is known in three forms that differ only in the way the rings are packed in the crystal. α-Sulfur α-Sulfur is the form most commonly found in nature. When pure it has a greenish-yellow colour (traces of cyclo- in commercially available samples make it appear yellower). 
It is practically insoluble in water and is a good electrical insulator with poor thermal conductivity. It is quite soluble in carbon disulfide: 35.5 g/100 g solvent at 25 °C. It has an orthorhombic crystal structure. α-Sulfur is the predominant form found in "flowers of sulfur", "roll sulfur" and "milk of sulfur". It contains puckered rings, alternatively called a crown shape. The S–S bond lengths are all 203.7 pm and the S-S-S angles are 107.8° with a dihedral angle of 98°. At 95.3 °C, α-sulfur converts to β-sulfur. β-Sulfur β-Sulfur is a yellow solid with a monoclinic crystal form and is less dense than α-sulfur. It is unusual because it is only stable above 95.3 °C; below this temperature it converts to α-sulfur. β-Sulfur can be prepared by crystallising at 100 °C and cooling rapidly to slow down formation of α-sulfur. It has a melting point variously quoted as 119.6 °C and 119.8 °C but as it decomposes to other forms at around this temperature the observed melting point can vary. The 119 °C melting point has been termed the "ideal melting point" and the typical lower value (114.5 °C) when decomposition occurs, the "natural melting point". γ-Sulfur γ-Sulfur was first prepared by F.W. Muthmann in 1890. It is sometimes called "nacreous sulfur" or "mother of pearl sulfur" because of its appearance. It crystallises in pale yellow monoclinic needles. It is the densest form of the three. It can be prepared by slowly cooling molten sulfur that has been heated above 150 °C or by chilling solutions of sulfur in carbon disulfide, ethyl alcohol or hydrocarbons. It is found in nature as the mineral rosickyite. It has been tested in carbon fiber-stabilized form as a cathode in lithium-sulfur (Li-S) batteries and was observed to stop the formation of polysulfides that compromise battery life. Cyclo- (n = 9–15, 18, 20) These allotropes have been synthesised by various methods for example, treating titanocene pentasulfide and a dichlorosulfane of suitable sulfur chain length, : or alternatively treating a dichlorosulfane, and a polysulfane, : , , and can also be prepared from . With the exception of cyclo-, the rings contain S–S bond lengths and S-S-S bond angle that differ one from another. Cyclo- is the most stable cyclo-allotrope. Its structure can be visualised as having sulfur atoms in three parallel planes, 3 in the top, 6 in the middle and three in the bottom. Two forms (α-, β-) of cyclo- are known, one of which has been characterized. Two forms of cyclo- are known where the conformation of the ring is different. To differentiate these structures, rather than using the normal crystallographic convention of α-, β-, etc., which in other cyclo- compounds refer to different packings of essentially the same conformer, these two conformers have been termed endo- and exo-. Cyclo-·cyclo- adduct This adduct is produced from a solution of cyclo- and cyclo- in . It has a density midway between cyclo- and cyclo-. The crystal consists of alternate layers of cyclo- and cyclo-. This material is a rare example of an allotrope that contains molecules of different sizes. Catena sulfur forms The term "Catena sulfur forms" refers to mixtures of sulfur allotropes that are high in catena (polymer chain) sulfur. The naming of the different forms is very confusing and care has to be taken to determine what is being described because some names are used interchangeably. 
Amorphous sulfur Amorphous sulfur is the quenched product from molten sulfur hotter than the λ-transition at 160 °C, where polymerization yields catena sulfur molecules. (Above this temperature, the properties of the liquid melt change remarkably. For example, the viscosity increases more than 10000-fold as the temperature increases through the transition). As it anneals, solid amorphous sulfur changes from its initial glassy form, to a plastic form, hence its other names of plastic, and glassy or vitreous sulfur. The plastic form is also called χ-sulfur. Amorphous sulfur contains a complex mixture of catena-sulfur forms mixed with cyclo-forms. Insoluble sulfur Insoluble sulfur is obtained by washing quenched liquid sulfur with . It is sometimes called polymeric sulfur, μ-S or ω-S. Fibrous (φ-) sulfur Fibrous (φ-) sulfur is a mixture of the allotropic ψ- form and γ-cyclo-. ω-Sulfur ω-Sulfur is a commercially available product prepared from amorphous sulfur that has not been stretched prior to extraction of soluble forms with . It sometimes called "white sulfur of Das" or supersublimated sulfur. It is a mixture of ψ-sulfur and lamina sulfur. The composition depends on the exact method of production and the sample's history. One well known commercial form is "Crystex". ω-sulfur is used in the vulcanization of rubber. λ-Sulfur λ-Sulfur is molten sulfur just above the melting temperature. It is a mixture containing mostly cyclo-. Cooling λ-sulfur slowly gives predominantly β-sulfur. μ-Sulfur μ-Sulfur is the name applied to solid insoluble sulfur and the melt prior to quenching. π-Sulfur π-Sulfur is a dark-coloured liquid formed when λ-sulfur is left to stay molten. It contains mixture of rings. Biradical catena () chains This term is applied to biradical catena-chains in sulfur melts or the chains in the solid. Solid catena allotropes The production of pure forms of catena-sulfur has proved to be extremely difficult. Complicating factors include the purity of the starting material and the thermal history of the sample. ψ-Sulfur This form, also called fibrous sulfur or ω1-sulfur, has been well characterized. It has a density of 2.01 g·cm−3 (α-sulfur 2.069 g·cm−3) and decomposes around its melting point of 104 °C. It consists of parallel helical sulfur chains. These chains have both left and right-handed "twists" and a radius of 95 pm. The S–S bond length is 206.6 pm, the S-S-S bond angle is 106° and the dihedral angle is 85.3°, (comparable figures for α-sulfur are 203.7 pm, 107.8° and 98.3°). Lamina sulfur Lamina sulfur has not been well characterized but is believed to consist of criss-crossed helices. It is also called χ-sulfur or ω2-sulfur. High-temperature gaseous allotropes Monatomic sulfur can be produced from photolysis of carbonyl sulfide. Disulfur, Disulfur, , is the predominant species in sulfur vapour above 720 °C (a temperature above that shown in the phase diagram); at low pressure (1 mmHg) at 530 °C, it comprises 99% of the vapor. It is a triplet diradical (like dioxygen and sulfur monoxide), with an S−S bond length of 188.7 pm. The blue colour of burning sulfur is due to the emission of light by the molecule produced in the flame. The molecule has been trapped in the compound (E = As, Sb) for crystallographic measurements, produced by treating elemental sulfur with excess iodine in liquid sulfur dioxide. The cation has an "open-book" structure, in which each ion donates the unpaired electron in the π* molecular orbital to a vacant orbital of the molecule. 
Trisulfur, is found in sulfur vapour, comprising 10% of vapour species at 440 °C and 10 mmHg. It is cherry red in colour, with a bent structure, similar to ozone, . Tetrasulfur, has been detected in the vapour phase, but it has not been well characterized. Diverse structures (e.g. chains, branched chains and rings) have been proposed. Theoretical calculations suggest a cyclic structure. Pentasulfur, Pentasulfur has been detected in sulfur vapours but has not been isolated in pure form. List of allotropes and forms Allotropes are in Bold. References Bibliography External links Amorphous solids
Allotropes of sulfur
[ "Physics", "Chemistry" ]
2,769
[ "Amorphous solids", "Allotropes of sulfur", "Unsolved problems in physics", "Allotropes" ]
14,385,549
https://en.wikipedia.org/wiki/Strongly%20minimal%20theory
In model theory—a branch of mathematical logic—a minimal structure is an infinite one-sorted structure such that every subset of its domain that is definable with parameters is either finite or cofinite. A strongly minimal theory is a complete theory all models of which are minimal. A strongly minimal structure is a structure whose theory is strongly minimal. Thus a structure is minimal only if the parametrically definable subsets of its domain cannot be avoided, because they are already parametrically definable in the pure language of equality. Strong minimality was one of the early notions in the new field of classification theory and stability theory that was opened up by Morley's theorem on totally categorical structures. The nontrivial standard examples of strongly minimal theories are the one-sorted theories of infinite-dimensional vector spaces, and the theories ACFp of algebraically closed fields of characteristic p. As the example ACFp shows, the parametrically definable subsets of the square of the domain of a minimal structure can be relatively complicated ("curves"). More generally, a subset of a structure that is defined as the set of realizations of a formula φ(x) is called a minimal set if every parametrically definable subset of it is either finite or cofinite. It is called a strongly minimal set if this is true even in all elementary extensions. A strongly minimal set, equipped with the closure operator given by algebraic closure in the model-theoretic sense, is an infinite matroid, or pregeometry. A model of a strongly minimal theory is determined up to isomorphism by its dimension as a matroid. Totally categorical theories are controlled by a strongly minimal set; this fact explains (and is used in the proof of) Morley's theorem. Boris Zilber conjectured that the only pregeometries that can arise from strongly minimal sets are those that arise in vector spaces, projective spaces, or algebraically closed fields. This conjecture was refuted by Ehud Hrushovski, who developed a method known as "Hrushovski construction" to build new strongly minimal structures from finite structures. See also C-minimal theory o-minimal theory References Model theory
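To see why the algebraically closed field example works, the following worked argument (a standard one, relying on quantifier elimination for ACF, and not quoted from the article) shows that every parametrically definable subset of the domain is finite or cofinite.

```latex
% Worked example (standard argument, assuming quantifier elimination for ACF):
% why an algebraically closed field is strongly minimal.
Let $K \models \mathrm{ACF}_p$. By quantifier elimination, every subset of $K$
definable with parameters is a finite Boolean combination of sets of the form
\[
  V_q = \{\, x \in K : q(x) = 0 \,\}, \qquad q \in K[x].
\]
If $q \neq 0$ then $|V_q| \le \deg q$, so each $V_q$ is finite (or, when $q = 0$,
all of $K$). Finite Boolean combinations of finite and cofinite sets are again
finite or cofinite, so every parametrically definable subset of $K$ is finite or
cofinite. The same argument applies in every model of $\mathrm{ACF}_p$ and in
every elementary extension, so the theory is strongly minimal.
```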
Strongly minimal theory
[ "Mathematics" ]
455
[ "Mathematical logic", "Model theory" ]
14,391,787
https://en.wikipedia.org/wiki/Bayes%20linear%20statistics
Bayes linear statistics is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. Bayes linear analysis attempts to solve this problem by developing theory and practice for using partially specified probability models. Bayes linear in its current form has been primarily developed by Michael Goldstein. Mathematically and philosophically it extends Bruno de Finetti's Operational Subjective approach to probability and statistics. Motivation Consider first a traditional Bayesian analysis where you expect to shortly know D and you would like to know more about some other observable B. In the traditional Bayesian approach it is required that every possible outcome is enumerated, i.e. every possible outcome is an element of the cross product of the partitions of B and D. If represented on a computer, where B requires n bits and D m bits, then the number of states required is 2^(n+m). The first step to such an analysis is to determine a person's subjective probabilities, e.g. by asking about their betting behaviour for each of these outcomes. When we learn D, conditional probabilities for B are determined by the application of Bayes' rule. Practitioners of subjective Bayesian statistics routinely analyse datasets where the size of this set is large enough that subjective probabilities cannot be meaningfully determined for every element of D × B. This is normally accomplished by assuming exchangeability and then using parameterized models with prior distributions over parameters, appealing to de Finetti's theorem to justify that this produces valid operational subjective probabilities over D × B. The difficulty with such an approach is that the validity of the statistical analysis requires that the subjective probabilities are a good representation of an individual's beliefs; however, this method results in a very precise specification over D × B, and it is often difficult to articulate what it would mean to adopt these belief specifications. In contrast to the traditional Bayesian paradigm, Bayes linear statistics, following de Finetti, uses prevision, or subjective expectation, as a primitive; probability is then defined as the expectation of an indicator variable. Instead of specifying a subjective probability for every element in the partition D × B, the analyst specifies subjective expectations for just a few quantities that they are interested in or feel knowledgeable about. Then, instead of conditioning, an adjusted expectation is computed by a rule that is a generalization of Bayes' rule based upon expectation. The use of the word linear in the title refers to de Finetti's arguments that probability theory is a linear theory (de Finetti argued against the more common measure-theoretic approach). Example In Bayes linear statistics, the probability model is only partially specified, and it is not possible to calculate conditional probability by Bayes' rule. Instead Bayes linear suggests the calculation of an adjusted expectation. To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements, D, and some future values which you would like to know, B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors, i.e. 
In order to specify a Bayes linear model it is necessary to supply expectations for the vectors B and D, and to also specify the correlation between each component of B and each component of D. For example, the expectations E(B_1), E(B_2), E(D_1), E(D_2) are specified, as is the covariance matrix between all components of B and D. The repetition in this matrix has some interesting implications to be discussed shortly. An adjusted expectation is a linear estimator of the form c_0 + c_1 D_1 + c_2 D_2, where the coefficients c_0, c_1 and c_2 are chosen to minimise the prior expected loss for the observations, i.e. B_1 and B_2 in this case. That is, for B_1 the coefficients are chosen in order to minimise the prior expected loss E([B_1 - c_0 - c_1 D_1 - c_2 D_2]^2) in estimating B_1. In general the adjusted expectation is calculated by setting E_D(X) = h_0 + h_1 D_1 + h_2 D_2 and choosing the coefficients h_i to minimise E([X - h_0 - h_1 D_1 - h_2 D_2]^2). From a proof provided in (Goldstein and Wooff 2007) it can be shown that: E_D(X) = E(X) + Cov(X, D) Var(D)^(-1) (D - E(D)). For the case where Var(D) is not invertible the Moore–Penrose pseudoinverse should be used instead. Furthermore, the adjusted variance of the variable X after observing the data D is given by Var_D(X) = Var(X) - Cov(X, D) Var(D)^(-1) Cov(D, X). See also Imprecise probability External links Bayes Linear Methods References Goldstein, M. (1981) Revising Previsions: a Geometric Interpretation (with Discussion). Journal of the Royal Statistical Society, Series B, 43(2), 105-130 Goldstein, M. (2006) Subjectivism principles and practice. Bayesian Analysis. Michael Goldstein, David Wooff (2007) Bayes Linear Statistics, Theory & Methods, Wiley. de Finetti, B. (1931) "Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science," (translation of 1931 article) in Erkenntnis, volume 31, September 1989. The entire double issue is devoted to de Finetti's philosophy of probability. de Finetti, B. (1937) “La Prévision: ses lois logiques, ses sources subjectives,” Annales de l'Institut Henri Poincaré - "Foresight: its Logical Laws, Its Subjective Sources," (translation of the 1937 article in French) in H. E. Kyburg and H. E. Smokler (eds), Studies in Subjective Probability, New York: Wiley, 1964. de Finetti, B. (1974) Theory of Probability, (translation by A Machi and AFM Smith of 1970 book) 2 volumes, New York: Wiley, 1974-5. Linear statistics Probability interpretations
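The adjustment formulas above are straightforward to evaluate numerically. The sketch below is a minimal illustration with made-up prior specifications (it does not reproduce the article's original example values); the pseudoinverse covers the non-invertible case mentioned above.

```python
# Illustrative sketch of the Bayes linear adjustment formulas quoted above.
# The prior expectations and covariances below are made-up numbers for the demo,
# not the values from the article's example.
import numpy as np

E_B = np.array([1.0, 1.0])            # prior expectations for B = (B1, B2)
E_D = np.array([2.0, 2.0])            # prior expectations for D = (D1, D2)
var_B = np.array([[1.0, 0.5],
                  [0.5, 1.0]])        # prior Var(B)
var_D = np.array([[1.0, 0.3],
                  [0.3, 1.0]])        # prior Var(D)
cov_BD = np.array([[0.6, 0.2],
                   [0.2, 0.6]])       # prior Cov(B, D)

d_obs = np.array([2.5, 1.8])          # the observed data

# E_D(B) = E(B) + Cov(B,D) Var(D)^+ (D - E(D)); pinv handles singular Var(D)
resolve = cov_BD @ np.linalg.pinv(var_D)
adjusted_expectation = E_B + resolve @ (d_obs - E_D)

# Var_D(B) = Var(B) - Cov(B,D) Var(D)^+ Cov(D,B)
adjusted_variance = var_B - resolve @ cov_BD.T

print(adjusted_expectation)
print(adjusted_variance)
```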
Bayes linear statistics
[ "Mathematics" ]
1,133
[ "Probability interpretations" ]
14,393,992
https://en.wikipedia.org/wiki/F2%20propagation
F2 propagation (F2-skip) is the reflection of VHF signals off the F2 layer of the ionosphere. The phenomenon is rare compared to other forms of propagation (such as sporadic E propagation, or E-skip) but can reflect signals thousands of miles beyond their intended broadcast area, substantially farther than E-skip. F2-skip affects the upper ends of the high frequency (HF) spectrum and the low ends of the very high frequency (VHF) spectrum; only a small portion of F2's effective range overlaps frequencies used by consumer broadcast reception, also contributing to the phenomenon being rarely encountered. Theory Solar activity has a cycle of approximately 11 years. During this period, sunspot activity rises to a peak and gradually falls again to a low level. When sunspot activity increases, the reflecting capabilities of the F1 layer surrounding earth enable high frequency short-wave communications. The highest-reflecting layer, the F2 layer, which is approximately above earth, receives ultraviolet radiation from the sun, causing ionisation of the gases within this layer. During the daytime when sunspot activity is at a maximum, the F2 layer can become intensely ionized due to radiation from the sun. When solar activity is sufficiently high, the maximum usable frequency (MUF) increases, hence the ionisation density is sufficient to reflect signals well into the 30-60 MHz VHF spectrum. Since the MUF progressively increases, F2 reception on lower frequencies can indicate potential low band 45-55 MHz VHF TV as well as VHF amateur radio paths. A rising MUF will initially affect the 27 MHz CB band, and the amateur 28 MHz 10 meters band before reaching 45-55 MHz TV and the 6 meters amateur band. The F2 MUF generally increases at a slower rate compared to the Es MUF. Since the height of the F2 layer is some , it follows that single-hop F2 signals will be received at thousands rather than hundreds of miles. A single-hop F2 signal will usually be around minimum. A maximum F2 single-hop can reach up to approximately . Multi-hop F2 propagation has enabled Band 1 VHF reception to over . Since F2 reception is directly related to radiation from the Sun on both a daily basis and in relation to the sunspot cycle, it follows that for optimum reception the centre of the signal path will be roughly at midday. Outside a solar maximum it can still occur somewhat regularly within about 15 to 20 degrees from the geomagnetic equator, with the peak generally being in spring time. However, this type of F2 propagation is mostly specifically referred to as TEP (Trans Equatorial Propagation) to differentiate it from the less common mid latitude F2 propagation. The F2 layer tends to predominantly propagate signals below 30 MHz (HF) during a solar minimum, which includes the 27 MHz CB radio, and 28 MHz 10-meter amateur radio band. During a solar maximum, television, amateur radio signals, private land mobile, and other services in the 30-60 MHz VHF spectrum are also propagated over considerable distances. In North America, F2 is most likely to only affect VHF TV channel 2, in Europe and middle east channel E2 and E3 (and the now deprecated channel itA) and in eastern Europe channel R1. Television pictures propagated via F2 tend to suffer from characteristic ghosting and smearing, although they are mostly stronger and more stable than double hop Sporadic E signal. Picture degradation and signal strength attenuation increases with each subsequent F2 hop. 
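The claim that single-hop F2 signals land thousands rather than hundreds of miles away follows from simple geometry: the longest hop occurs when the ray leaves and returns tangentially to the Earth's surface. The sketch below assumes a representative F2 layer height of 300 km (a typical value; the article's own figures are not reproduced here), which yields a maximum single hop of roughly 3,800 km.

```python
# Geometric sketch (not from the article): the maximum single-hop range for a
# reflecting layer occurs when the ray leaves and returns tangentially to the
# Earth's surface.  The F2 layer height below is an assumed typical value.
import math

EARTH_RADIUS_KM = 6371.0
F2_HEIGHT_KM = 300.0          # assumed; real heights vary with time and location

def max_single_hop_km(layer_height_km):
    """Great-circle distance of one tangential hop off a layer at the given height."""
    half_angle = math.acos(EARTH_RADIUS_KM / (EARTH_RADIUS_KM + layer_height_km))
    return 2 * EARTH_RADIUS_KM * half_angle

print(f"~{max_single_hop_km(F2_HEIGHT_KM):.0f} km single hop")      # roughly 3800 km
print(f"~{2 * max_single_hop_km(F2_HEIGHT_KM):.0f} km with two hops")
```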
Notable F2 DX receptions In November 1938, 405-line video from the BBC Alexandra Palace television station (London, England) on channel B1 (45.0 MHz) was received in New York, US. In 1958, the FM broadcast radio DX record was set by DXer Gordon Simkin in southern California, United States, when he logged a 45 MHz commercial FM station from Korea via trans-Pacific F2 propagation at a distance of . In October 1979, Anthony Mann (Perth, Western Australia) received 48.25 MHz audio and 51.75 MHz video from the Holme Moss BBC channel B2 television transmitter. This F2 reception is a world record for reception from a BBC 405-line channel B2 transmitter. During October to December 1979, United Kingdom DXers Roger Bunney (Hampshire), Hugh Cocks (Sussex), Mike Allmark (Leeds), and Ray Davies (Norwich) all received viewable television pictures from Australian channel TVQ 0 Brisbane (46.26 MHz) via multi-hop F2 propagation. On January 31, 1981, Todd Emslie, Sydney, Australia, received 41.5 MHz channel B1 television audio transmitted from Crystal Palace Transmitter by the BBC's television service, away. This BBC B1 reception was also recorded on to audio tape. He has also received Dubai's DCRTV 48.25 MHz video on November 23, 1991, in the same place. On February 8, 1992, emedxer from Perth, West Australia, received ARD E2 video from Grünten on 48.2604 MHz at a distance of 13,750 km away. On April 18, 2014, the DXer HughTVDX, received Canal 2 Posada (Misiones) - Argentina (55.251 MHz video) in southern Portugal about 8700 km away From late March until mid April 2023 Dante's Enigmatic World received various TV signals from the Philippines, such as AMBS ALLTV and TV5 A2 with video and audio (55.25 MHz video, 59.75 MHz audio) in Kyoto, Japan 3,200 km away. See also Tropospheric propagation Federal Standard 1037C MW DX Skywave Radio propagation Clear-channel station References External links TV/FM Antenna Locator Worldwide TV/FM DX Association Worldwide TV/FM DX Association Forums Band 1 TVDX from Europe, North African and Middle East FMDX database British FM & TV Circle, Home of FM & TV DX in the UK Girard Westerberg's page, including a live DX webcam Mike's TV and FM DX Page since 1999 Todd Emslie's TV FM DX Page Jeff Kadet's TV DX Page FM DX Italy The official FM & TV DX website in Italy fmdxITALY Home of FM & TV DX in Italy FMLIST is a non-commercial worldwide database of FM stations, including a bandscan and logbook tool (FMINFO/myFM) Mixture.fr AM/FM/DAB database for France MeteorComm Meteor Burst Technology used for Data Communication FMSCAN reception prediction of FM, TV, MW, SW stations (also use the expert options for better results) Herman Wijnants' FMDX pages TV/FM Skip Log qth.net Mailing Lists for Radio, Television, Amateur and other related information for Enthusiasts. North American TV Logo Gallery VHF DXing - From Fort Walton Beach, Florida Radio-info.com DX and Reception FM DX RDS LogBook Software Ionosphere Radio frequency propagation th:ทีวีดีเอกซ์
F2 propagation
[ "Physics" ]
1,461
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
9,245,403
https://en.wikipedia.org/wiki/Misuse%20detection
Misuse detection actively works against potential insider threats to vulnerable computer data. Misuse Misuse detection is an approach to detecting computer attacks. In a misuse detection approach, abnormal system behaviour is defined first, and then all other behaviour is defined as normal. It stands against the anomaly detection approach which utilizes the reverse: defining normal system behaviour first and defining all other behaviour as abnormal. With misuse detection, anything not known is normal. An example of misuse detection is the use of attack signatures in an intrusion detection system. Misuse detection has also been used more generally to refer to all kinds of computer misuse. References Further reading For more information on Misuse Detection, including papers written on the subject, consider the following: Misuse Detection Concepts and Algorithms - article by the IR Lab at IIT. Data security
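The contrast between the two approaches can be shown in a few lines of code. The signatures, baseline and log entries below are invented for illustration and do not come from any real intrusion detection product.

```python
# Minimal illustration of the two approaches described above.  The signatures
# and log lines are invented examples.
ATTACK_SIGNATURES = ["' OR 1=1", "../../etc/passwd", "<script>"]   # known-bad patterns

def misuse_detect(event: str) -> bool:
    """Misuse detection: flag only what matches a known attack signature."""
    return any(sig in event for sig in ATTACK_SIGNATURES)

KNOWN_GOOD = {"GET /index.html", "GET /style.css"}                  # baseline of normal behaviour

def anomaly_detect(event: str) -> bool:
    """Anomaly detection: flag anything that is not in the normal baseline."""
    return event not in KNOWN_GOOD

for event in ["GET /index.html", "GET /search?q=' OR 1=1", "GET /new-page.html"]:
    print(event, "| misuse:", misuse_detect(event), "| anomaly:", anomaly_detect(event))
# The unseen-but-benign request is flagged only by the anomaly detector, while the
# injection attempt is flagged by both; with misuse detection anything that does
# not match a signature is treated as normal.
```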
Misuse detection
[ "Engineering" ]
166
[ "Cybersecurity engineering", "Data security" ]
9,248,456
https://en.wikipedia.org/wiki/Deuterated%20chloroform
Deuterated chloroform, also known as chloroform-d, is the organic compound with the formula CDCl3. Deuterated chloroform is a common solvent used in NMR spectroscopy. The properties of CDCl3 and CHCl3 (chloroform) are virtually identical. Deuterochloroform was first made in 1935 during the years of research on deuterium. Preparation Deuterated chloroform is commercially available. It is more easily produced and less expensive than deuterated dichloromethane. Deuterochloroform is produced by the reaction of hexachloroacetone with deuterium oxide, using pyridine as a catalyst. The large difference in boiling points between the starting material and product facilitates purification by distillation. Treating chloral with sodium deuteroxide (NaOD) gives deuterated chloroform. NMR solvent In proton NMR spectroscopy, deuterated solvent (enriched to >99% deuterium) is typically used to avoid recording a large interfering signal or signals from the proton(s) (i.e., hydrogen-1) present in the solvent itself. If nondeuterated chloroform (containing a full equivalent of protium) were used as solvent, the solvent signal would almost certainly overwhelm and obscure any nearby analyte signals. In addition, modern instruments usually require the presence of deuterated solvent, as the field frequency is locked using the deuterium signal of the solvent to prevent frequency drift. Commercial chloroform-d does, however, still contain a small amount (0.2% or less) of non-deuterated chloroform; this results in a small singlet at 7.26 ppm, known as the residual solvent peak, which is frequently used as an internal chemical shift reference. In carbon-13 NMR spectroscopy, the sole carbon in deuterated chloroform shows a triplet at a chemical shift of 77.16 ppm with the three peaks being about equal size, resulting from splitting by spin coupling to the attached spin-1 deuterium atom (CHCl3 has a chemical shift of 77.36 ppm). Deuterated chloroform is a general-purpose NMR solvent, as it is not very chemically reactive and unlikely to exchange its deuterium with its solute, and its low boiling point allows for easy sample recovery. It is, however, incompatible with strongly basic, nucleophilic, or reducing analytes, including many organometallic compounds. Hazards Chloroform reacts photochemically with oxygen to form chlorine, phosgene and hydrogen chloride. To slow this process and reduce the acidity of the solvent, chloroform-d is stored in brown-tinted bottles, often over copper chips or silver foil as stabilizer. Instead of metals, a small amount of a neutralizing base like potassium carbonate may be added. It is less toxic to the liver and kidneys than CHCl3 due to the stronger C–D bond as compared to the C–H bond, making it somewhat less prone to form the destructive trichloromethyl radical. References Deuterated solvents Organochlorides Trichloromethyl compounds Nuclear magnetic resonance
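The 1:1:1 triplet mentioned above follows from the standard first-order multiplicity rule for coupling to n equivalent nuclei of spin I; a short worked calculation (standard NMR reasoning, not quoted from the article):

```latex
% The 1:1:1 triplet in the carbon-13 spectrum follows from the multiplicity rule
% for coupling to n equivalent nuclei of spin I:
\[
  N \;=\; 2nI + 1 .
\]
% For the single attached deuterium atom ($n = 1$, $I = 1$):
\[
  N \;=\; 2\,(1)(1) + 1 \;=\; 3 .
\]
% Because the three Zeeman states of a spin-1 nucleus are (very nearly) equally
% populated, the three lines have approximately equal intensity, unlike the
% 1:2:1 triplet produced by coupling to two spin-1/2 protons.
```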
Deuterated chloroform
[ "Physics", "Chemistry" ]
691
[ "Deuterated solvents", "Nuclear magnetic resonance", "Nuclear physics" ]
9,250,002
https://en.wikipedia.org/wiki/Energy%20planning
Energy planning has a number of different meanings, but the most common meaning of the term is the process of developing long-range policies to help guide the future of a local, national, regional or even the global energy system. Energy planning is often conducted within governmental organizations but may also be carried out by large energy companies such as electric utilities or oil and gas producers. These oil and gas producers release greenhouse gas emissions. Energy planning may be carried out with input from different stakeholders drawn from government agencies, local utilities, academia and other interest groups. Since 1973, energy modeling, on which energy planning is based, has developed significantly. Energy models can be classified into three groups: descriptive, normative, and futuristic forecasting. Energy planning is often conducted using integrated approaches that consider both the provision of energy supplies and the role of energy efficiency in reducing demands (Integrated Resource Planning). Energy planning should always reflect the outcomes of population growth and economic development. There are also several alternative energy solutions which avoid the release of greenhouse gases, like electrifying current machines and using nuclear energy. A new energy plan for cities emerges from a careful investigation of the planning process, which integrates city planning and energy planning together and provides energy solutions for high-level cities and industrial parks. Planning and market concepts Energy planning has traditionally played a strong role in setting the framework for regulations in the energy sector (for example, influencing what type of power plants might be built or what prices were charged for fuels). But in the past two decades many countries have deregulated their energy systems so that the role of energy planning has been reduced, and decisions have increasingly been left to the market. This has arguably led to increased competition in the energy sector, although there is little evidence that this has translated into lower energy prices for consumers. Indeed, in some cases, deregulation has led to significant concentrations of "market power" with large very profitable companies having a large influence as price setters. Integrated resource planning Approaches to energy planning depend on the planning agent and the scope of the exercise. Several catch-phrases are associated with energy planning. Basic to all is resource planning, i.e. a view of the possible sources of energy in the future. A forking in methods is whether the planner considers the possibility of influencing the consumption (demand) for energy. The 1970s energy crisis ended a period of relatively stable energy prices and a stable supply-demand relation. Concepts of demand side management, least cost planning and integrated resource planning (IRP) emerged with new emphasis on the need to reduce energy demand by new technologies or simple energy saving. Sustainable energy planning Further global integration of energy supply systems and local and global environmental limits amplifies the scope of planning both in subject and time perspective. Sustainable energy planning should consider environmental impacts of energy consumption and production, particularly in light of the threat of global climate change, which is caused largely by emissions of greenhouse gases from the world's energy systems; addressing this is a long-term process. 
The 2022 renewable energy industry outlook shows that supportive policies from an administration focused on combatting climate change aid the expected growth of the renewable energy industry in 2022's political landscape. Biden has argued in favor of developing the clean energy industry in the US and in the world to vigorously address climate change. President Biden expressed his intention to move away from the oil industry. The administration's 2022 "Plan for Climate Change and Environmental Justice" aims to reach 100% carbon-free power generation by 2035 and net-zero emissions by 2050 in the USA. Many OECD countries and some U.S. states are now moving to more closely regulate their energy systems. For example, many countries and states have been adopting targets for emissions of CO2 and other greenhouse gases. In light of these developments, broad-scope integrated energy planning could become increasingly important. Sustainable energy planning takes a more holistic approach to the problem of planning for future energy needs. It is based on a structured decision-making process with six key steps, namely: Exploration of the context of the current and future situation Formulation of particular problems and opportunities which need to be addressed as part of the Sustainable Energy Planning process. This could include such issues as "peak oil" or "economic recession/depression", as well as the development of energy demand technologies. Create a range of models to predict the likely impact of different scenarios. This traditionally would consist of mathematical modelling but is evolving to include "Soft System Methodologies" such as focus groups, peer ethnographic research, "what if" logical scenarios etc. Based on the output from a wide range of modelling exercises and literature reviews, open forum discussion etc., the results are analysed and structured in an easily interpreted format. The results are then interpreted to determine the scope, scale and likely implementation methodologies which would be required to ensure successful implementation. This stage is a quality assurance process which actively interrogates each stage of the Sustainable Energy Planning process and checks that it has been carried out rigorously and without bias, and that it furthers the aims of sustainable development and does not act against them. The last stage of the process is to take action. This may consist of the development, publication and implementation of a range of policies, regulations, procedures or tasks which together will help to achieve the goals of the Sustainable Energy Plan. Designing for implementation is often carried out using "Logical Framework Analysis" which interrogates a proposed project and checks that it is completely logical, that it has no fatal errors and that appropriate contingency arrangements have been put in place to ensure that the complete project will not fail if a particular strand of the project fails. Sustainable energy planning is particularly appropriate for communities who want to develop their own energy security, while employing best available practice in their planning processes. Energy planning tools (software) Energy planning can be conducted on different software platforms and over various timespans and with different qualities of resolution (i.e. very short divisions of time/space or very large divisions). 
There are multiple platforms available for all sorts of energy planning analysis, with focuses on different areas, and significant growth in terms of modeling software or platforms available in recent years. Energy planning tools can be identified as commercial, open source, educational, free, and as used by governments (often custom tools). Potential energy solutions Electrification One potential energy option is the move to electrify all machines that currently use fossil fuels for their energy source. There are already electric alternatives available such as electric cars, electric cooktops, and electric heat pumps, now these products need to be widely implemented to electrify and decarbonize our energy use. To reduce our dependence on fossil fuels and transfer to electric machines, it requires that all electricity be generated by renewable sources. As of 2020 60.3% of all energy generated in the United States came from fossil fuels, 19.7% came from nuclear energy, and 19.8% came from renewables. The United States is still heavily relying on fossil fuels as a source of energy. For the electrification of our machines to help the efforts to decarbonize, more renewable energy sources, such as wind and solar would have to be built. Another potential problem that comes with the use of renewable energy is the energy transmission. A study conducted by Princeton University found that the locations with the highest renewable potential are in the Midwest, however, the places with the highest energy demand are coastal cities. To effectively make use of the electricity coming from these renewable sources, the U.S. electric grid would have to be nationalized, and more high voltage transmission lines would have to be built. The total amount of electricity that the grid would have to be able to accommodate has to increase. If more electric cars were being driven there would be a decline in gasoline demand and an increased demand for electricity, this increased demand for electricity would require our electric grids to be able to transport more energy at any given moment than is currently viable. Nuclear Energy Nuclear energy is sometimes considered to be a clean energy source. Nuclear energy's only associated carbon emission takes place during the process of mining for uranium, but the process of obtaining energy from uranium does not emit any carbon. A primary concern in using nuclear energy stems from the issue of what to do with radioactive waste. The highest level source of radioactive waste comes from the spent reactor fuel, the radioactive fuel decreases over time through radioactive decay. The time it takes for the radioactive waste to decay depends on the length of the substance's half-life. Currently, the United States does not have a permanent disposal facility for high-level nuclear waste. Public support behind increasing nuclear energy production is an important consideration when planning for sustainable energy. Nuclear energy production has a complicated past. Multiple nuclear power plants having accidents or meltdowns has tainted the reputation of nuclear energy for many. A considerable section of the public is concerned about the health and environmental impacts of a nuclear power plant melting down, believing that the risk is not worth the reward. 
However, a portion of the population believes that expanding nuclear energy is necessary and that the threats of climate change far outweigh the possibility of a meltdown, especially considering the advancements in technology that have been made within recent decades. Global greenhouse gas emissions and energy production The majority of global manmade greenhouse gas emissions are derived from the energy sector, which contributes 72.0% of global emissions. The majority of that energy goes toward producing electricity and heat (31.0%); the next largest contributors are transportation (15%), manufacturing (12%), agriculture (11%), and forestry (6%). There are multiple molecular compounds that fall under the classification of greenhouse gases, including carbon dioxide, methane, and nitrous oxide. Carbon dioxide is the largest emitted greenhouse gas, making up 76% of global emissions. Methane is the second largest emitted greenhouse gas at 16%; it is primarily emitted from the agriculture industry. Lastly, nitrous oxide makes up 6% of globally emitted greenhouse gases; agriculture and industry are the largest emitters of nitrous oxide. The challenges in the energy sector include the reliance on coal. Coal production remains key to the energy mix, and global imports rely on coal to meet the growing demand for gas. Energy planning evaluates the current energy situation and estimates future changes based on industrialization patterns and resource availability. Many of the future changes and solutions depend on the global effort to move away from coal, to develop energy-efficient technology, and to continue to electrify the world. See also References External links An online community for energy planners working on energy for sustainable development. A master's education on Energy planning at Aalborg University in Denmark. Energy development Energy policy Climate change policy
Energy planning
[ "Environmental_science" ]
2,176
[ "Environmental social science", "Energy policy" ]
9,250,314
https://en.wikipedia.org/wiki/Selective%20adsorption
In surface science, selective adsorption is the effect in which minima associated with bound-state resonances occur in the specular intensity in atom-surface scattering. In crystal growth, selective adsorption refers to the phenomenon where adsorbing molecules attach preferentially to certain crystal faces. An example of selective adsorption can be demonstrated in the growth of Rochelle salt crystals. If copper ions are added to the solution during the growth process, the growth of some crystal faces will slow down as copper apparently becomes a barrier to adsorption. However, by then adding sodium hydroxide to the solution, the preferred crystal faces will change once again. Discovery Pronounced intensity minima were first observed in 1930 by Immanuel Estermann, Otto Frisch, and Otto Stern, during a series of gas-surface interaction experiments attempting to demonstrate the wave nature of atoms and molecules. The phenomenon was explained in 1936 by John Lennard-Jones and Devonshire in terms of resonant transitions to bound surface states. Significance The selective adsorption binding energies can supply information on the gas-surface interaction potentials by yielding the vibrational energy spectrum of the gas atom bound to the surface. Starting from the 1970s, it has been extensively studied, both theoretically and experimentally. Energy levels measured with this technique are available for many systems. References Surface science
Selective adsorption
[ "Physics", "Chemistry", "Materials_science" ]
267
[ "Physical chemistry stubs", "Condensed matter physics", "Surface science" ]
5,445,341
https://en.wikipedia.org/wiki/Tantalum%28IV%29%20sulfide
Tantalum(IV) sulfide is an inorganic compound with the formula TaS2. It is a layered compound with three-coordinate sulfide centres and trigonal prismatic or octahedral metal centres. It is structurally similar to molybdenum disulfide MoS2, and numerous other transition metal dichalcogenides. Tantalum disulfide has three polymorphs 1T-TaS2, 2H-TaS2, and 3R-TaS2, representing trigonal, hexagonal, and rhombohedral symmetry respectively. The properties of the 1T-TaS2 polytype have been studied in particular detail. The CDW, a periodic distortion induced by the electron-phonon interaction, is manifested by the formation of a superlattice made up of clusters of 13 atoms, called the Star of David (SOD), in which the surrounding 12 Ta atoms move slightly towards the centre of the star. There are three 1T-TaS2 charge density wave phases: the commensurate charge density wave (CCDW), the nearly commensurate charge density wave (NCCDW), and the incommensurate charge density wave (ICCDW). In the CCDW phase the entire material is covered with the superlattice, whereas in the ICCDW phase the atoms do not shift. The NCCDW phase lies between the two: the SOD clusters are confined within nearly hexagonal-shaped areas. The phase transition of 1T-TaS2 can be driven by changing temperature, which is one of the most investigated ways of switching the material between phases. In common with many other transition metal dichalcogenide (TMD) compounds, which are metallic at high temperatures, it exhibits a series of charge-density-wave (CDW) phase transitions between 550 K and 50 K. It is unusual amongst them in showing a low-temperature insulating state below 200 K, which is believed to arise from electron correlations, similar to many oxides. The insulating state is commonly attributed to a Mott state. On cooling below 550 K, 1T-TaS2 transitions from metallic to ICCDW; it reaches the NCCDW phase on cooling below 350 K, and finally enters the CCDW phase below 180 K. However, if the temperature is instead raised, another phase can appear between the CCDW phase and the NCCDW phase. The triclinic charge density wave (TCDW) is likewise a hybrid state between CCDW and ICCDW; the difference is that instead of forming enclosed hexagonal areas, the material forms stripes with different atomic shifts. When 1T-TaS2 is heated from low temperature, the first transition is from CCDW to TCDW at 220 K; on continued heating above 280 K the material transitions to the NCCDW phase. It is also superconducting under pressure or upon doping, with a familiar dome-like phase diagram as a function of dopant, or substituted isovalent element concentration. Metastability. 1T-TaS2 is unique, not only amongst TMDs but also amongst 'quantum materials' in general, in showing a metastable metallic state at low temperatures. Switching from the insulating to the metallic state can be achieved either optically or by the application of electrical pulses. The metallic state is persistent below ~20 K, but its lifetime can be tuned by changing the temperature. The metastable state lifetime can also be tuned by strain. The electrically-induced switching between states is of current interest, because it can be used for ultrafast energy-efficient memory devices. Because of the frustrated triangular arrangement of localized electrons, the material is suspected of supporting some form of quantum spin liquid state. It has been the subject of numerous studies as a host for intercalation of electron donors. 
Preparation TaS2 is prepared by reaction of powdered tantalum and sulfur at ~900 °C. It is purified and crystallized by chemical vapor transport using iodine as the transporting agent: TaS2 + 2 I2 ⇌ TaI4 + 2 S It can be easily cleaved and has a characteristic golden sheen. Upon extended exposure to air, the formation of an oxide layer causes darkening of the surface. Thin films can be prepared by chemical vapour deposition and molecular beam epitaxy. Properties Three major crystalline phases are known for TaS2: trigonal 1T with one S-Ta-S sheet per unit cell, hexagonal 2H with two S-Ta-S sheets, and rhombohedral 3R with three S-Ta-S sheets per cell; 4H and 6R phases are also observed, but less frequently. These polymorphs mostly differ by the relative arrangement of the S-Ta-S sheets rather than the sheet structure. 2H-TaS2 is a superconductor with the bulk transition temperature TC = 0.5 K, which increases to 2.2 K in flakes with a thickness of a few atomic layers. The bulk TC value increases up to ~8 K at 10 GPa and then saturates with increasing pressure. In contrast, 1T-TaS2 starts superconducting only at ~2 GPa; as a function of pressure its TC quickly rises up to 5 K at ~4 GPa and then saturates. At ambient pressure and low temperatures 1T-TaS2 is a Mott insulator. Upon heating it changes to a triclinic charge density wave (TCDW) state at TTCDW ~ 220 K, to a nearly commensurate charge density wave (NCCDW) state at TNCCDW ~ 280 K, to an incommensurate CDW (ICCDW) state at TICCDW ~ 350 K, and to a metallic state at TM ~ 600 K. In the CDW state the TaS2 lattice deforms to create a periodic Star of David pattern. Application of short optical laser pulses (e.g. 50 fs) or of voltage pulses (~2–3 V), delivered through electrodes or in a scanning tunneling microscope (STM), to the CDW state causes a drop in electrical resistance and creates a "mosaic" or domain state consisting of nanometer-sized domains, where both the domains and their walls exhibit metallic conductivity. This mosaic structure is metastable and gradually disappears upon heating. Memory devices and other potential applications Switching of the material to and from the "mosaic", or domain state, by optical or electrical pulses is used for "Charge configuration memory" (CCM) devices. The distinguishing feature of such devices is that they exhibit very efficient and fast non-thermal resistance switching at low temperatures. Room temperature operation of a charge-density-wave oscillator and thermally-driven GHz modulation of the CDW state have been demonstrated. References Disulfides Tantalum compounds Transition metal dichalcogenides Monolayers
Tantalum(IV) sulfide
[ "Physics" ]
1,466
[ "Monolayers", "Atoms", "Matter" ]
5,445,365
https://en.wikipedia.org/wiki/Tantalum%20trialuminide
Tantalum trialuminide (TaAl3) is an inorganic chemical compound. This compound and Ta3Al are stable, refractory, and reflective, and they have been proposed as coatings for use in infrared wave mirrors. References Aluminides Tantalum compounds
Tantalum trialuminide
[ "Chemistry" ]
60
[ "Intermetallics", "Inorganic compounds", "Aluminides", "Inorganic compound stubs" ]
5,445,371
https://en.wikipedia.org/wiki/Tantalum%28V%29%20bromide
Tantalum(V) bromide is the inorganic compound with the formula Ta2Br10. Its name comes from the compound's empirical formula, TaBr5. It is a diamagnetic, orange solid that hydrolyses readily. The compound adopts an edge-shared bioctahedral structure, which means that two TaBr5 units are joined by a pair of bromide bridges. There is no bond between the Ta centres. Niobium(V) chloride, niobium(V) bromide, niobium(V) iodide, tantalum(V) chloride, and tantalum(V) iodide all share this structural motif. Preparation and handling The material is usually prepared by the reaction of bromine with tantalum metal (or tantalum carbide) at elevated temperatures in a tube furnace. The bromides of the early metals are sometimes preferred to the chlorides because of the relative ease of handling liquid bromine vs gaseous chlorine. Like other molecular halides, it is soluble in nonpolar solvents such as carbon tetrachloride (1.465 g/100 mL at 30 °C), but it reacts with some solvents. It can also be produced from the more accessible oxide by metathesis using aluminium tribromide: 3 Ta2O5 + 10 AlBr3 → 6 TaBr5 + 5 Al2O3 Carbothermal reduction of the oxide in the presence of bromine has also been employed, the byproduct being COBr2. References Bromides Tantalum(V) compounds Metal halides
Tantalum(V) bromide
[ "Chemistry" ]
343
[ "Bromides", "Inorganic compounds", "Metal halides", "Salts" ]
5,445,429
https://en.wikipedia.org/wiki/Terbium%28III%29%20iodide
Terbium(III) iodide (TbI3) is an inorganic chemical compound. Preparation Terbium(III) iodide can be produced by reacting terbium and iodine. Terbium iodide hydrate can be crystallized from solution by reacting hydriodic acid with terbium, terbium(III) oxide, terbium hydroxide or terbium carbonate. An alternative method is reacting terbium and mercury(II) iodide at 500 °C. Structure Terbium(III) iodide adopts the bismuth(III) iodide (BiI3) crystal structure type, with octahedral coordination of each Tb3+ ion by 6 iodide ions. References Iodides Terbium compounds Lanthanide halides
Terbium(III) iodide
[ "Chemistry" ]
172
[ "Inorganic compounds", "Inorganic compound stubs" ]
5,445,445
https://en.wikipedia.org/wiki/Terbium%28III%29%20oxide
Terbium(III) oxide, also known as terbium sesquioxide, is a sesquioxide of the rare earth metal terbium, having the chemical formula Tb2O3. It is a p-type semiconductor and a proton conductor, and its conductivity is enhanced when doped with calcium. It may be prepared by the reduction of Tb4O7 in hydrogen at 1300 °C for 24 hours. It is a basic oxide and dissolves readily in dilute acids, forming almost colourless terbium(III) salts: Tb2O3 + 6 H+ → 2 Tb3+ + 3 H2O The crystal structure is cubic and the lattice constant is a = 1057 pm. References Terbium compounds Sesquioxides Semiconductor materials
Terbium(III) oxide
[ "Chemistry" ]
150
[ "Semiconductor materials", "Inorganic compounds", "Inorganic compound stubs" ]
5,447,381
https://en.wikipedia.org/wiki/Spin%20pumping
Spin pumping is the dynamical generation of pure spin current by the coherent precession of magnetic moments, which can efficiently inject spin from a magnetic material into an adjacent non-magnetic material. The non-magnetic material usually hosts the spin Hall effect that can convert the injected spin current into a charge voltage easy to detect. A spin pumping experiment typically requires electromagnetic irradiation to induce magnetic resonance, which converts energy and angular momenta from electromagnetic waves (usually microwaves) to magnetic dynamics and then to electrons, enabling the electronic detection of electromagnetic waves. The device operation of spin pumping can be regarded as the spintronic analog of a battery. Spin pumping involves an AC effect and a DC effect: The AC effect generates a spin current that oscillates at the same frequency as the microwave source. The DC effect requires that the magnetic dynamics be circularly polarized or elliptically polarized, whereas a linear oscillation can only generate an AC component. Both effects result in a net enhancement of the effective magnetic damping. Spin pumping in ferromagnets The spin current pumped into an adjacent layer by a precessing magnetic moment is given by I_s = (ħ/4π) g↑↓ m × (dm/dt), where I_s is the spin current (the vector indicates the orientation of the spin, not the direction of the current), g↑↓ is the spin-mixing conductance characterizing the spin transparency of the interface, M_s is the saturation magnetization, and m = M/M_s is the time-dependent orientation of the moment. Optical, microwave and electrical methods are also being explored. These devices could be used for low-power data transmission in spintronic devices or to transmit electrical signals through insulators. Spin pumping in antiferromagnets Spin pumping in antiferromagnetic materials does not vanish because the antiparallel magnetic moments contribute constructively rather than destructively to spin current, which was theoretically predicted in 2014. Since the frequency of antiferromagnetic resonance is much higher than that of ferromagnetic resonance, spin pumping in antiferromagnets can be utilized to study electromagnetic signals in the sub-terahertz and terahertz regime, which had been demonstrated by two independent experiments in 2020. Besides higher frequency, spin pumping in antiferromagnets features the chirality degree of freedom of magnetic dynamics that does not exist in ferromagnets. For example, the spin currents pumped by the left-handed and the right-handed resonance modes are opposite in direction. References See also Spintronics Spin wave Spin Hall effect Spintronics
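As a rough numerical illustration of the pumping formula quoted in this entry (the sketch is not drawn from any particular spin-pumping study), the following Python snippet evaluates the time-averaged (DC) spin current density for a circularly precessing moment, using the fact that the component of m × dm/dt along the precession axis averages to ω sin²θ; the values chosen for the spin-mixing conductance, microwave frequency and cone angle are illustrative assumptions only.

import numpy as np

# Illustrative, assumed parameters (not from a specific experiment)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
g_mix = 2.5e19           # interfacial spin-mixing conductance per unit area, 1/m^2
f = 10e9                 # microwave / precession frequency, Hz
theta = np.radians(2.0)  # precession cone angle

omega = 2 * np.pi * f
# For circular precession m = (sin(theta)cos(wt), sin(theta)sin(wt), cos(theta)),
# the component of m x dm/dt along the precession axis time-averages to omega*sin(theta)^2.
j_dc = (hbar / (4 * np.pi)) * g_mix * omega * np.sin(theta) ** 2
print(f"DC spin current density ~ {j_dc:.2e} J/m^2 (angular momentum flow per unit area)")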
Spin pumping
[ "Physics", "Materials_science" ]
512
[ "Spintronics", "Condensed matter physics" ]
5,450,517
https://en.wikipedia.org/wiki/Corrosion%20in%20space
Corrosion in space is the corrosion of materials occurring in outer space. Instead of moisture and oxygen acting as the primary corrosion causes, the materials exposed to outer space are subjected to vacuum, bombardment by ultraviolet and X-rays, solar energetic particles (mostly electrons and protons from solar wind), and electromagnetic radiation. In the upper layers of the atmosphere (between 90–800 km), the atmospheric atoms, ions, and free radicals, most notably atomic oxygen, play a major role. The concentration of atomic oxygen depends on altitude and solar activity, as the bursts of ultraviolet radiation cause photodissociation of molecular oxygen. Between 160 and 560 km, the atmosphere consists of about 90% atomic oxygen. Materials Corrosion in space has the highest impact on spacecraft with moving parts. Early satellites tended to develop problems with seizing bearings. Now the bearings are coated with a thin layer of gold. Different materials resist corrosion in space differently. Electrolytes in batteries or cooling loops can cause galvanic corrosion, general corrosion, and stress corrosion. Aluminium is slowly eroded by atomic oxygen, while gold and platinum are highly corrosion-resistant. Gold-coated foils and thin layers of gold on exposed surfaces are therefore used to protect the spacecraft from the harsh environment. Thin layers of silicon dioxide deposited on the surfaces can also protect metals from the effects of atomic oxygen; e.g., the Starshine 3 satellite aluminium front mirrors were protected that way. However, the protective layers are subject to erosion by micrometeorites. Silver builds up a layer of silver oxide, which tends to flake off and has no protective function; such gradual erosion of silver interconnects of solar cells was found to be the cause of some observed in-orbit failures. Many plastics are considerably sensitive to atomic oxygen and ionizing radiation. Coatings resistant to atomic oxygen are a common protection method, especially for plastics. Silicone-based paints and coatings are frequently employed, due to their excellent resistance to radiation and atomic oxygen. However, the silicone durability is somewhat limited, as the surface exposed to atomic oxygen is converted to silica which is brittle and tends to crack. Solving corrosion The process of space corrosion is being actively investigated. One of the efforts aims to design a sensor based on zinc oxide, able to measure the amount of atomic oxygen in the vicinity of the spacecraft; the sensor relies on drop of electrical conductivity of zinc oxide as it absorbs further oxygen. Other problems The outgassing of volatile silicones on low Earth orbit devices leads to presence of a cloud of contaminants around the spacecraft. Together with atomic oxygen bombardment, this may lead to gradual deposition of thin layers of carbon-containing silicon dioxide. Their poor transparency is a concern in case of optical systems and solar panels. Deposits of up to several micrometers were observed after 10 years of service on the solar panels of the Mir space station. Other sources of problems for structures subjected to outer space are erosion and redeposition of the materials by sputtering caused by fast atoms and micrometeoroids. Another major concern, though of non-corrosive kind, is material fatigue caused by cyclical heating and cooling and associated thermal expansion mechanical stresses. 
See also Space weathering References External links The Cosmos on a Shoestring: Small Spacecraft for Space and Earth Science, Appendix B: Failure in Spacecraft Systems PDF New Scientist premium article: Space is corrosive NASA Long Duration Exposition Facility: surface contamination in space Corrosion Spaceflight
Corrosion in space
[ "Chemistry", "Materials_science", "Astronomy" ]
705
[ "Outer space", "Metallurgy", "Corrosion", "Electrochemistry", "Materials degradation", "Spaceflight" ]
5,451,403
https://en.wikipedia.org/wiki/Tunnel%20hull
A tunnel hull is a type of boat hull that uses two hulls, typically planing hulls, with a solid centre section that traps air. This entrapment creates aerodynamic lift in addition to the planing (hydrodynamic) lift from the hulls, an effect often attributed to ground effect. Theoretical research and full-scale testing of tunnel hulls has demonstrated the dramatic contributions of 'close-proximity ground effect' to enhanced aerodynamic lift/drag in the operation of performance tunnel hull designs. Tunnel hulls are distinguishable from other catamarans by the typically close hull spacing and the solid deck in between the hulls. Formula 1 powerboats use a tunnel hull catamaran design, allowing them to reach higher speeds. Tunnel hulls are a common design in offshore powerboat racing. References See also Cathedral hull Hickman sea sled Boston Whaler Supercavitation propeller Offshore Powerboat Racing Shipbuilding
Tunnel hull
[ "Engineering" ]
182
[ "Shipbuilding", "Marine engineering" ]
10,749,616
https://en.wikipedia.org/wiki/Lactarius%20rubrilacteus
Lactarius rubrilacteus is a species of mushroom of the genus Lactarius. It is also known as the bleeding milkcap, as is at least one other member of the genus, Lactarius sanguifluus. Description The mushroom can have either a bluish-green or an orangey-brown hue, with creamy white or yellow spores that are ellipsoid in shape. Greenish colours are more common in old, damaged or unexpanded specimens. The cap of the mushroom is convex and sometimes shield-shaped, with a depressed central disc and a distinctly inrolled margin. Lactarius rubrilacteus has many laticifers, which appear as a white network across the surface of the mushroom. When sliced or cut, the mushroom flesh will typically release a dark red to purple latex or milky substance. The flesh itself loses colour when damaged, and is usually granular or brittle to the touch. The stem is coloured like the cap, thin, and up to several centimetres long. The fungus has a slight, faintly aromatic odour. This mushroom is edible but of little interest. It is commonly found with a small blue or green mushroom attached at the base, and it bruises green. Similar species Lactarius deliciosus is a related species, but its cap differs in appearance. L. sanguifluus is also similar. Distribution and habitat The mushroom is primarily found in parts of western North America, growing in forests and on the ground. It usually grows under conifer trees, mainly Douglas fir. It is widely distributed in these areas between the months of June and October. Chemical reactivity Potassium hydroxide: When the mushroom comes in contact with potassium hydroxide, most of the mushroom, including the mantle and ectomycorrhizae, loses its bluish hue and becomes a dull brown. Melzer's reagent: Hardly any visible reaction occurs on any part of the mushroom; this particular mushroom appears to have little reactivity to Melzer's reagent. Sulfovanillin: Most of the mushroom becomes a reddish-brown colour, but the oldest roots of the fungus stay unaltered by contact with sulfovanillin. See also List of Lactarius species References rubrilacteus Fungi described in 1979 Fungi of North America Edible fungi Taxa named by Alexander H. Smith Fungus species
Lactarius rubrilacteus
[ "Biology" ]
497
[ "Fungi", "Fungus species" ]
10,751,045
https://en.wikipedia.org/wiki/Novozymes
Novozymes A/S was a global biotechnology company headquartered in Bagsværd, outside of Copenhagen, Denmark. The company's focus was the research, development and production of industrial enzymes, microorganisms, and biopharmaceutical ingredients. The company merged with Chr. Hansen to form Novonesis in January 2024. Prior to the merger, the company had operations around the world, including in China, India, Brazil, Argentina, United Kingdom, the United States, and Canada. Class B shares of its stock were listed on the NASDAQ OMX Nordic exchange. History In 1925, the brothers Harald and Thorvald Pedersen founded Novo Terapeutisk Laboratorium and Nordisk Insulinlaboratorium with the aim to produce insulin. In 1941 the company's predecessor launched its first enzyme, trypsin, extracted from the pancreas of animals and used to soften leather, and was the first to produce enzymes by fermentation using bacteria in the 1950s. In the late 1980s Novozymes presented the world's first fat-splitting enzyme for detergents manufactured with genetically engineered microorganisms, called Lipolase. The current Novozymes was founded in 2000 as a spinout from pharmaceutical company Novo Nordisk. In the 2000s Novozymes expanded through the acquisition of several companies focusing on business outside the core enzyme business. Amongst them were the Brazilian bio agricultural company Turfal and German pharmaceutical, chemical and life science company EMD/Merck Crop BioScience Inc. These acquisitions made Novozymes a leader in sustainable solutions for the agricultural biological industry. In January 2016, the company spun out its biopharmaceutical operations into Albumedix. In June 2020, the business announced it would acquire Ireland-based PrecisionBiotics for $90 million. In December of the same year Novozymes announced it would acquire Microbiome Labs in a $125 million deal. In 12 December 2023, it was announced that Novozymes and Danish bioscience company Chr. Hansen had obtained regulatory approval for a merger, and on the following day, the name of the combined company was revealed as Novonesis. Ownership The Novozymes class A share capital is held by Novo Holdings A/S, a wholly owned subsidiary of the Novo Nordisk Foundation. In addition, Novo A/S holds 5,826,280 B shares, which overall gives Novo A/S 25.5% of the total share capital and 70.1% of the votes. References External links Forbes Magazine: "100 Corporations That Will Survive 100 Years" (January 28, 2009) Companies listed on Nasdaq Copenhagen Biotechnology companies of Denmark Life science companies based in Copenhagen Companies based in Gladsaxe Municipality Pharmaceutical companies established in 1925 Danish companies established in 2000 Danish brands Biotechnology companies established in 1925 Yeast banks Companies in the OMX Copenhagen 25 Companies in the S&P Europe 350 Dividend Aristocrats
Novozymes
[ "Biology" ]
604
[ "Life sciences industry", "Life science companies based in Copenhagen" ]
10,757,955
https://en.wikipedia.org/wiki/Mobile%20Internet%20device
A mobile Internet device (MID) is a multimedia capable mobile device providing wireless Internet access. They are designed to provide entertainment, information and location-based services for personal or business use. They allow 2-way communication and real-time sharing. They have been described as filling a niche between smartphones and tablet computers. As all the features of MID started becoming available on smartphones and tablets, the term is now mostly used to refer to both low-end as well as high-end tablets. Archos Internet tablets The form factor of mobile Internet tablets from Archos is very similar to the Lenovo image on the right. The class has included multiple operating systems: Windows CE, Windows 7 and Android. The Android tablet uses an ARM Cortex CPU and a touchscreen. Intel Mobile Internet Device (MID) platform Intel announced a prototype MID at the Intel Developer Forum in Spring 2007 in Beijing. A MID development kit by Sophia Systems using Intel Centrino Atom was announced in April 2008. Intel MID platforms are based on an Intel processor and chipset which consume less power than most of the x86 derivatives. A few platforms have been announced as listed below: McCaslin platform (2007) Intel's first generation MID platform (codenamed McCaslin) contains a 90 nm Intel A100/A110 processor (codenamed Stealey) which runs at 600–800 MHz. Menlow platform (2008) On 2 March 2008, Intel introduced the Intel Atom processor brand for a new family of low-power processor platforms. The components have thin, small designs and work together to "enable the best mobile computing and Internet experience" on mobile and low-power devices. Intel's second generation MID platform (codenamed Menlow) contains a 45 nm Intel Atom processor (codenamed Silverthorne) which can run up to 2.0 GHz and a System Controller Hub (codenamed Poulsbo) which includes Intel HD Audio (codenamed Azalia). This platform was initially branded as Centrino Atom but such practice was discontinued in Q3 2008. Moorestown platform (2010) Intel's third generation MID/smartphone platform (codenamed Moorestown) contains a 45 nm Intel Atom processor (codenamed Lincroft ) and a separate 65 nm Platform Controller Hub (codenamed Langwell). Since the memory controller and graphics controller are all now integrated into the processor, the northbridge has been removed and the processor communicates directly with the southbridge via the DMI bus interface. Medfield platform (2012) Intel's fourth generation MID/smartphone platform (codenamed Medfield) contains their first complete Intel Atom SoC (codenamed Penwell), produced on 32 nm. Clover Trail+ platform (2012) Intel's MID/smartphone platform (codenamed Clover Trail+) based on its Clover Trail tablet platform. It contains a 32 nm Intel Atom SoC (codenamed Cloverview). Merrifield platform (2013) Intel's fifth generation MID/smartphone platform (codenamed Merrifield ) contains a 22 nm Intel Atom SoC (codenamed Tangier). Moorefield platform (2014) Intel's sixth generation MID/smartphone platform (codenamed Moorefield) contains a 22 nm Intel Atom SoC (codenamed Anniedale). Morganfield platform Intel's seventh generation MID/smartphone platform (codenamed Morganfield) contains a 14 nm Intel Atom SoC (codenamed Broxton). Operating system Intel announced collaboration with Ubuntu to create Ubuntu for mobile internet devices distribution, known as Ubuntu Mobile. 
Ubuntu's website said the new distribution "will provide a rich Internet experience for users of Intel’s 2008 Mobile Internet Device (MID) platform." Ubuntu Mobile ended active development in 2009. See also Centrino Phablet Android (operating system) CrunchPad Moblin project Netbook / smartbook Ubuntu Mobile Ultra-mobile PC (UMPC) WiMAX Mobile web References Mobile computers Classes of computers Mobile web
Mobile Internet device
[ "Technology" ]
832
[ "Mobile web", "Wireless networking", "Computer systems", "Computers", "Classes of computers" ]
10,759,380
https://en.wikipedia.org/wiki/GADGET
GADGET is free software for cosmological N-body/SPH simulations written by Volker Springel at the Max Planck Institute for Astrophysics. The name is an acronym of "GAlaxies with Dark matter and Gas intEracT". It is released under the GNU GPL. It can be used to study, for example, galaxy formation and dark matter. Description GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed-particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, both with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range which is, in principle, unlimited. GADGET can therefore be used to address a wide array of astrophysically interesting problems, ranging from colliding and merging galaxies, to the formation of large-scale structure in the universe. With the inclusion of additional physical processes such as radiative cooling and heating, GADGET can also be used to study the dynamics of the gaseous intergalactic medium, or to address star formation and its regulation by feedback processes. History The first public version (GADGET-1), released in March 2000, was created as part of Volker Springel's PhD project under the supervision of Simon White. Later, the code was continuously improved during postdocs of Volker Springel at the Center for Astrophysics Harvard & Smithsonian and the Max Planck Institute, in collaboration with Simon White and Lars Hernquist. The second public version (GADGET-2), released in May 2005, contains most of these improvements, except for the numerous physics modules developed for the code that go beyond gravity and ordinary gas dynamics. The most important changes lie in a new time integration model, a new tree-code module, a new communication scheme for gravitational and SPH forces, a new domain decomposition strategy, a novel SPH formulation based on entropy as independent variable, and finally, in the addition of the TreePM functionality. See also Computational physics Millennium Run References External links GADGET homepage Free astronomy software Cosmological simulation
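GADGET's production algorithms (tree and TreePM gravity, SPH, fully adaptive individual time steps) are far more sophisticated than anything that fits in a few lines, but the underlying problem it integrates, a softened self-gravitating N-body system, can be sketched for orientation. The Python sketch below is my own illustration, using naive O(N²) direct summation and a kick-drift-kick leapfrog integrator; every parameter value is arbitrary.

import numpy as np

# Minimal direct-summation N-body sketch (illustrative only; GADGET itself uses
# a hierarchical tree / TreePM scheme and adaptive individual time steps).
rng = np.random.default_rng(0)
N, G, eps, dt, steps = 64, 1.0, 0.05, 1e-3, 200

pos = rng.normal(size=(N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)

def accel(pos):
    # Softened pairwise gravitational acceleration, O(N^2)
    d = pos[None, :, :] - pos[:, None, :]          # displacement vectors i -> j
    r2 = (d ** 2).sum(-1) + eps ** 2               # softened squared distances
    np.fill_diagonal(r2, np.inf)                   # exclude self-interaction
    return G * (d * (mass[None, :, None] / r2[..., None] ** 1.5)).sum(axis=1)

# Kick-drift-kick leapfrog integration
a = accel(pos)
for _ in range(steps):
    vel += 0.5 * dt * a
    pos += dt * vel
    a = accel(pos)
    vel += 0.5 * dt * a

print("centre of mass:", (mass[:, None] * pos).sum(0))

A tree code such as GADGET replaces the O(N²) pairwise sum with a hierarchical multipole approximation, which is what makes large cosmological particle counts tractable.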
GADGET
[ "Physics" ]
497
[ "Cosmological simulation", "Computational physics" ]
10,761,967
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20experiment
The Eötvös experiment was a physics experiment that measured the correlation between inertial mass and gravitational mass, demonstrating that the two were one and the same, something that had long been suspected but never demonstrated with the same accuracy. The earliest experiments were done by Isaac Newton (1642–1727) and improved upon by Friedrich Wilhelm Bessel (1784–1846). A much more accurate experiment using a torsion balance was carried out by Loránd Eötvös starting around 1885, with further improvements in a lengthy run between 1906 and 1909. Eötvös's team followed this with a series of similar but more accurate experiments, as well as experiments with different types of materials and in different locations around the Earth, all of which demonstrated the same equivalence in mass. In turn, these experiments led to the modern understanding of the equivalence principle encoded in general relativity, which states that the gravitational and inertial masses are the same. It is sufficient for the inertial mass to be proportional to the gravitational mass. Any multiplicative constant will be absorbed in the definition of the unit of force. Eötvös's original experiment Eötvös's original experimental device consisted of two masses on opposite ends of a rod, hung from a thin fiber. A mirror attached to the rod, or fiber, reflected light into a small telescope. Even tiny changes in the rotation of the rod would cause the light beam to be deflected, which would in turn cause a noticeable change when magnified by the telescope. As seen from the Earth's frame of reference (or "lab frame", which is not an inertial frame of reference), the primary forces acting on the balanced masses are the string tension, gravity, and the centrifugal force due to the rotation of the Earth. Gravity is calculated by Newton's law of universal gravitation, which depends on gravitational mass. The centrifugal force is calculated by Newton's laws of motion and depends on inertial mass. The experiment was arranged so that if the two types of masses were different, the two forces will not act in exactly the same way on the two bodies, and over time the rod will rotate. As seen from the rotating "lab frame", the string tension plus the (much smaller) centrifugal force cancels the weight (as vectors), while as seen from any inertial frame the (vector) sum of the weight and the tension makes the object rotate along with the earth. For the rod to be at rest in the lab frame, the reactions, on the rod, of the tensions acting on each body, must create a zero net torque (the only degree of freedom is rotation on the horizontal plane). Supposing that the system was constantly at rest – this meaning mechanical equilibrium (i.e. net forces and torques zero) – with the two bodies thus hanging also at rest, but having different centrifugal forces upon them and consequently exerting different torques on the rod through the reactions of the tensions, the rod then would spontaneously rotate, in contradiction with our assumption that the system is at rest. So the system cannot exist in this state; any difference between the centrifugal forces on the two bodies will set the rod in rotation. Further improvements Initial experiments around 1885 demonstrated that there was no apparent difference, and Eötvös improved the experiment to demonstrate this with more accuracy. In 1889 he used the device with different types of sample materials to see if there was any change in gravitational force due to materials. 
This experiment proved that no such change could be measured, to a claimed accuracy of 1 in 20 million. In 1890 he published these results, as well as a measurement of the mass of Gellért Hill in Budapest. The next year he started work on a modified version of the device, which he called the "horizontal variometer". This modified the basic layout slightly to place one of the two rest masses hanging from the end of the rod on a fiber of its own, as opposed to being attached directly to the end. This allowed it to measure torsion in two dimensions, and in turn, the local horizontal component of g. It was also much more accurate. Now generally referred to as the Eötvös balance, this device is commonly used today in prospecting by searching for local mass concentrations. Using the new device a series of experiments taking 4000 hours was carried out with Dezsö Pekár (1873–1953) and Jenő Fekete (1880–1943) starting in 1906. These were first presented at the 16th International Geodesic Conference in London in 1909, raising the accuracy to 1 in 100 million. Eötvös died in 1919, and the complete measurements were only published in 1922 by Pekár and Fekete. Related studies Eötvös also studied similar experiments being carried out by other teams on moving ships, which led to his development of the Eötvös effect to explain the small differences they measured. These were due to the additional accelerative forces due to the motion of the ships in relation to the Earth, an effect that was demonstrated on an additional run carried out on the Black Sea in 1908. In the 1930s a former student of Eötvös, János Renner (1889–1976), further improved the results to between 1 in 2 billion and 1 in 5 billion. Robert H. Dicke, with P. G. Roll and R. Krotkov, re-ran the experiment much later using improved apparatus and further improved the accuracy to 1 in 100 billion. They also made several observations about the original experiment which suggested that the claimed accuracy was somewhat suspect. Re-examining the data in light of these concerns led to an apparent very slight effect that appeared to suggest that the equivalence principle was not exact, and changed with different types of material. In the 1980s several new physics theories attempting to combine gravitation and quantum mechanics suggested that matter and anti-matter would be affected slightly differently by gravity. Combined with Dicke's claims, there appeared to be a possibility that such a difference could be measured; this led to a new series of Eötvös-type experiments (as well as timed falls in evacuated columns) that eventually demonstrated no such effect. A side-effect of these experiments was a re-examination of the original Eötvös data, including detailed studies of the local stratigraphy, the physical layout of the Physics Institute (which Eötvös had personally designed), and even the weather and other effects. The experiment is therefore well recorded. Table of measurements over time Tests on the Equivalence principle See also Fifth force Inertial frame General relativity Foucault pendulum Eddington experiment Tests of general relativity References Physics experiments Gravimetry
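To get a feeling for the size of the signal discussed in the description of the original experiment above, the following back-of-the-envelope Python sketch (my own illustration, not Eötvös's analysis) compares the centrifugal acceleration at Budapest's latitude with gravity and estimates the torque that a fractional mismatch eta between inertial and gravitational mass would produce on an assumed balance arm; the mass, arm length and eta are invented round numbers.

import numpy as np

# Illustrative estimate of the signal an Eotvos-type balance must resolve.
# The apparatus dimensions below are assumed values, not Eotvos's actual design.
omega = 2 * np.pi / 86164.0        # Earth's sidereal rotation rate, rad/s
R_earth = 6.371e6                  # mean Earth radius, m
lat = np.radians(47.5)             # latitude of Budapest
g = 9.81                           # m/s^2

a_cf = omega**2 * R_earth * np.cos(lat)   # centrifugal acceleration
a_cf_horiz = a_cf * np.sin(lat)           # its component along the local horizontal

print(f"centrifugal / gravity             : {a_cf / g:.2e}")
print(f"horizontal component of it (m/s^2): {a_cf_horiz:.2e}")

# Torque from a fractional inertial/gravitational mass mismatch eta,
# for an assumed 40 g test mass on a 20 cm moment arm:
eta = 1e-9
m, arm = 0.040, 0.20
torque = eta * m * a_cf_horiz * arm
print(f"torque for eta = {eta:.0e}          : {torque:.1e} N*m")

The centrifugal acceleration comes out at only a few thousandths of g, and the corresponding torques are correspondingly tiny, which is why a torsion balance was the natural instrument for the comparison.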
Eötvös experiment
[ "Physics" ]
1,366
[ "Experimental physics", "Physics experiments" ]
8,612,907
https://en.wikipedia.org/wiki/Relative%20interior
In mathematics, the relative interior of a set is a refinement of the concept of the interior, which is often more useful when dealing with low-dimensional sets placed in higher-dimensional spaces. Formally, the relative interior of a set S (denoted relint S) is defined as its interior within the affine hull of S. In other words, relint S = { x ∈ S : there exists ε > 0 such that B_ε(x) ∩ aff(S) ⊆ S }, where aff(S) is the affine hull of S and B_ε(x) is a ball of radius ε centered on x. Any metric can be used for the construction of the ball; all metrics define the same set as the relative interior. A set is relatively open iff it is equal to its relative interior. Note that when aff(S) is a closed subspace of the full vector space (always the case when the full vector space is finite-dimensional) then being relatively closed is equivalent to being closed. For any convex set C the relative interior is equivalently defined as relint C = { x ∈ C : for every y ∈ C there exists some λ > 1 such that λx + (1 − λ)y ∈ C }. Comparison to interior The interior of a point in an at least one-dimensional ambient space is empty, but its relative interior is the point itself. The interior of a line segment in an at least two-dimensional ambient space is empty, but its relative interior is the line segment without its endpoints. The interior of a disc in an at least three-dimensional ambient space is empty, but its relative interior is the same disc without its circular edge. Properties See also References Further reading Topology
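The characterization for convex sets given in this entry can be turned into a small computational test. For the convex hull of finitely many points, the relative interior is exactly the set of strictly positive convex combinations of those points, which can be checked with a linear program. The Python sketch below is a minimal illustration assuming SciPy is available; the function name and the segment example are mine.

import numpy as np
from scipy.optimize import linprog

def in_relative_interior(x, vertices, tol=1e-9):
    """Test whether x lies in the relative interior of conv(vertices).

    Uses the fact that relint of the convex hull of finitely many points is the
    set of strictly positive convex combinations: maximize t subject to
        sum_i lam_i = 1,  V^T lam = x,  lam_i >= t,
    and check whether the optimal t is strictly positive.
    """
    V = np.asarray(vertices, dtype=float)       # shape (m, d)
    m, d = V.shape
    c = np.zeros(m + 1); c[-1] = -1.0           # variables: lam_1..lam_m, t; minimize -t
    A_eq = np.zeros((d + 1, m + 1))
    A_eq[0, :m] = 1.0                           # sum(lam) = 1
    A_eq[1:, :m] = V.T                          # V^T lam = x
    b_eq = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    A_ub = np.zeros((m, m + 1))
    A_ub[:, :m] = -np.eye(m)                    # t - lam_i <= 0
    A_ub[:, -1] = 1.0
    b_ub = np.zeros(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m + [(None, None)])
    return res.success and -res.fun > tol

# A segment in the plane: its interior in R^2 is empty, but its relative
# interior is the open segment.
segment = [(0.0, 0.0), (1.0, 0.0)]
print(in_relative_interior((0.5, 0.0), segment))   # True: strictly inside
print(in_relative_interior((1.0, 0.0), segment))   # False: an endpoint

Run on the line segment from the comparison above, the midpoint is reported as belonging to the relative interior while an endpoint is not, even though the segment has empty interior in the plane.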
Relative interior
[ "Physics", "Mathematics" ]
277
[ "Spacetime", "Topology", "Space", "Geometry" ]
8,614,012
https://en.wikipedia.org/wiki/Ultra%201
The Ultra 1 is a family of Sun Microsystems workstations based on the 64-bit UltraSPARC microprocessor. It was the first model in the Ultra series of Sun computers, which succeeded the SPARCstation series. It launched in November 1995 alongside the MP-capable Ultra 2 and shipped with Solaris 2.5. It is capable of running other operating systems such as Linux and BSD. Specifications The Ultra 1 was available in a variety of specifications. At launch the line comprised the Ultra 1 Model 140, the Ultra 1 Creator 170E, and the Ultra 1 Creator3D 170E. CPU Three different CPU speeds were available: 143 MHz (Model 140), 167 MHz (Model 170) and 200 MHz (Model 200). Models Model numbers with an E suffix (Sun service code A12, code-named Electron) had two instead of three SBus slots, and added a UPA slot to allow the use of an optional Creator framebuffer. In addition, the E models had Wide SCSI and Fast Ethernet interfaces, in place of the narrow SCSI and 10BASE-T Ethernet of the standard Ultra 1 (service code A11, code-named Neutron). Memory The Ultra 1 uses 200-pin 5V ECC 60 ns SIMMs in pairs, the same memory used in the SPARCstation 20. Similar Machines Similar Sun machines were the Netra i 1 servers, which had the same chassis, and the UltraServer 1/Ultra Enterprise 1 servers. See also Ultra series References External links Ultra 1 Series Reference Manual Ultra 1 Series Service Manual Ultra 1 Creator Series Reference Manual Ultra 1 Creator Series Service Manual Workstations Product Library Documentation Sun workstations SPARC microprocessor products
Ultra 1
[ "Technology" ]
352
[ "Computing stubs", "Computer hardware stubs" ]
972,005
https://en.wikipedia.org/wiki/Autofrettage
Autofrettage is a work-hardening process in which a pressure vessel (thick walled) is subjected to enormous pressure, causing internal portions of the part to yield plastically, resulting in internal compressive residual stresses once the pressure is released. The goal of autofrettage is to increase the pressure-carrying capacity of the final product. Inducing residual compressive stresses into materials can also increase their resistance to stress corrosion cracking; that is, non-mechanically assisted cracking that occurs when a material is placed in a corrosive environment in the presence of tensile stress. The technique is commonly used in the manufacture of high-pressure pump cylinders, warship and gun barrels, and fuel injection systems for diesel engines. Due to the work-hardening process it also marginally enhances the wear life of the barrel. While autofrettage will induce some work hardening, that is not the primary mechanism of strengthening. The start point is a single steel tube of internal diameter slightly less than the desired calibre. The tube is subjected to internal pressure of sufficient magnitude to enlarge the bore, and in the process the inner layers of the metal are stretched in tension beyond their elastic limit. This means that the inner layers have been stretched to a point where the steel is no longer able to return to its original shape once the internal pressure has been removed. Although the outer layers of the tube are also stretched, the degree of internal pressure applied during the process is such that they are not stretched beyond their elastic limit. The reason why this is possible is that the stress distribution through the walls of the tube is non-uniform. Its maximum value occurs in the metal adjacent to the source of pressure, decreasing markedly towards the outer layers of the tube. The strain is proportional to the stress applied within the elastic limit; therefore the expansion at the outer layers is less than at the bore. Because the outer layers remain elastic they attempt to return to their original shape; however, they are prevented from doing so completely by the new permanently stretched inner layers. The effect is that the inner layers of the metal are put under compression by the outer layers in much the same way as though an outer layer of metal had been shrunk on, as with a built-up gun. This can be better understood by treating the thick-walled tube as a multilayer tube. The next step is to subject the compressively strained inner layers to a low-temperature treatment (LTT) which results in the elastic limit being raised to at least the autofrettage pressure employed in the first stage of the process. Finally, the elasticity of the barrel can be tested by applying internal pressure once more, but this time care is taken to ensure that the inner layers are not stretched beyond their new elastic limit. The end result is an inner surface of the gun barrel with a residual compressive stress able to counterbalance the tensile stress that would be induced when the gun is discharged. In addition the material has a higher tensile strength due to work hardening. Early in the history of artillery, people observed that, after firing a small number of rounds, the bore of a new gun slightly enlarges and hardens. Historically, the first type of autofrettage avant la lettre was the mandrelling of bronze gun barrels, invented and patented in 1869 by Samuel B. Dean of the South Boston Iron Company. 
But it found no use on the American continent and was copied without a license by Franz von Uchatius in the mid-1870s. It found some use in several European countries lacking a steel industry, but was quickly displaced by cast steel everywhere except Austria-Hungary, which stuck to the obsolete technology until WWI and therefore had its artillery handicapped. The problem of strengthening steel gun barrels using the same principle was tackled by French colonial artillery colonel Louis Frédéric Gustave Jacob, who suggested in 1907 pressurizing them hydraulically and coined the term "autofrettage". In 1913, Schneider-Creusot made a 14 cm L/50 naval gun by such a method and applied for a patent. However, implementing such a technique on an industrial scale required numerical methods to approximate the solutions of the transcendental equations of plastic deformation, which were developed in France during WWI by the mathematics professor Maurice d'Ocagne and the Schneider engineer Louis Potin. In modern practice, a slightly oversized die is pushed slowly through the barrel by a hydraulically driven ram. The amount of initial underbore and the oversize of the die are calculated to strain the material around the bore past its elastic limit into plastic deformation. A residual compressive stress remains on the barrel's inner surface, even after final honing and rifling. The technique has been applied to the expansion of tubular components down hole in oil and gas wells. The method has been patented by the Norwegian oil service company Meta, which uses it to connect concentric tubular components with the sealing and strength properties outlined above. The term autofrettage is also used to describe a step in the manufacturing of composite overwrapped pressure vessels (COPV) where the liner is expanded (by plastic deformation) inside the composite overwrap. See also Shot peening, which also induces compressive residual stresses Built-up gun, an older method for strengthening gun barrels References External links White Paper Autofrettage Metalworking Firearm construction
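The non-uniform stress distribution described in this entry can be made concrete with the classical Lamé solution for an elastic thick-walled cylinder under internal pressure, in which the hoop stress is greatest at the bore and falls off roughly as 1/r². The short Python sketch below is only an illustration of that behaviour; the radii and pressure are assumed values, and a real autofrettage calculation must also model yielding and the resulting residual stresses.

# Lame elastic solution for a thick-walled cylinder under internal pressure p.
# Dimensions and pressure are assumed, for illustration only.
a, b = 0.050, 0.100      # inner and outer radius, m
p = 400e6                # internal pressure, Pa

def hoop_stress(r):
    return p * a**2 / (b**2 - a**2) * (1.0 + b**2 / r**2)

def radial_stress(r):
    return p * a**2 / (b**2 - a**2) * (1.0 - b**2 / r**2)

for r in (a, 0.075, b):
    print(f"r = {r*1e3:5.1f} mm : hoop = {hoop_stress(r)/1e6:7.1f} MPa, "
          f"radial = {radial_stress(r)/1e6:7.1f} MPa")
# The hoop stress is largest at the bore (r = a), which is why the inner layers
# yield first during autofrettage while the outer layers remain elastic.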
Autofrettage
[ "Engineering" ]
1,078
[ "Firearm construction", "Mechanical engineering" ]
972,312
https://en.wikipedia.org/wiki/Liquid%20air
Liquid air is air that has been cooled to very low temperatures (cryogenic temperatures), so that it has condensed into a pale blue mobile liquid. It is stored in specialized containers, such as vacuum flasks, to insulate it from room temperature. Liquid air can absorb heat rapidly and revert to its gaseous state. It is often used for condensing other substances into liquid and/or solidifying them, and as an industrial source of nitrogen, oxygen, argon, and other inert gases through a process called air separation (industrially referred to as air rectification.). Properties Liquid air has a density of approximately . The density of a given air sample varies depending on the composition of that sample (e.g. humidity & concentration). Since dry gaseous air contains approximately 78% nitrogen, 21% oxygen, and 1% argon, the density of liquid air at standard composition is calculated by the percentage of the components and their respective liquid densities (see liquid nitrogen and liquid oxygen). Although air contains trace amounts of carbon dioxide (about 0.03%), carbon dioxide solidifies from the gas phase without passing through the intermediate liquid phase, and hence will not be present in liquid air at pressures less than . The boiling point of air is , intermediate between the boiling points of liquid nitrogen and liquid oxygen. However, it can be difficult to keep at a stable temperature as the liquid boils, since the nitrogen will boil off first, leaving the mixture oxygen-rich and changing the boiling point. This may also occur in some circumstances due to the liquid air condensing oxygen out of the atmosphere. Liquid air starts to freeze at approximately , precipitating nitrogen-rich solid (but with appreciable amount of oxygen in solid solution). Unless the oxygen is previously accommodated in the solid solution, the eutectic freezes at 50 K. Preparation Principle of production The constituents of air were once known as "permanent gases", as they could not be liquified solely by compression at room temperature. A compression process will raise the temperature of the gas. This heat is removed by cooling to the ambient temperature in a heat exchanger, and then expanding by venting into a chamber. The expansion causes a lowering of the temperature, and by counter-flow heat exchange of the expanded air, the pressurized air entering the expander is further cooled. With sufficient compression, flow, and heat removal, eventually droplets of liquid air will form, which may then be employed directly for low temperature demonstrations. The main constituents of air were liquefied for the first time by Polish scientists Karol Olszewski and Zygmunt Wróblewski in 1883. Devices for the production of liquid air are not commercially available, and not easily fabricated. Process of production The most common process for the preparation of liquid air is the two-column Hampson–Linde cycle using the Joule–Thomson effect. Air is fed at high pressure (>) into the lower column, in which it is separated into pure nitrogen and oxygen-rich liquid. The rich liquid and some of the nitrogen are fed as reflux into the upper column, which operates at low pressure (<), where the final separation into pure nitrogen and oxygen occurs. A raw argon product can be removed from the middle of the upper column for further purification. Air can also be liquefied by Claude's process, which combines cooling by Joule–Thomson effect, isentropic expansion and regenerative cooling. 
Application In manufacturing processes, the liquid air product is typically fractionated into its constituent gases in either liquid or gaseous form, as the oxygen is especially useful for fuel gas welding and cutting and for medical use, and the argon is useful as an oxygen-excluding shielding gas in gas tungsten arc welding. Liquid nitrogen is useful in various low-temperature applications, being nonreactive at normal temperatures (unlike oxygen), and boiling at . Transport and energy storage Between 1899 and 1902, the automobile Liquid Air was produced and demonstrated by a joint American/English company, with the claim that they could construct a car that would run a hundred miles on liquid air. On 2 October 2012, the Institution of Mechanical Engineers said liquid air could be used as a means of storing energy. This was based on a technology that was developed by Peter Dearman, a garage inventor in Hertfordshire, England to power vehicles. See also Liquid nitrogen Liquid oxygen Cryogenic energy storage Industrial gas Liquefaction of gases Liquid nitrogen vehicle References External links 2013-05-20 MIT Technology Review article on liquid air developments for transportation and grid energy storage Atmosphere Coolants Cryogenics Energy storage Energy technology Engineering thermodynamics Industrial gases Industrial processes Phases of matter
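The component-weighted density estimate mentioned in the Properties section above can be illustrated with a short calculation. The Python sketch below converts the approximate molar composition of dry air into mass fractions and combines assumed handbook values for the boiling-point densities of the pure liquid components, so the result is only a rough ideal-mixture estimate.

# Rough estimate of liquid-air density from its main components.
# Mole fractions and liquid densities are approximate, assumed values.
components = {
    #       (mole fraction, molar mass g/mol, liquid density kg/m^3)
    "N2": (0.781, 28.013, 807.0),
    "O2": (0.209, 31.999, 1141.0),
    "Ar": (0.009, 39.948, 1395.0),
}

# Convert mole fractions to mass fractions.
total_mass = sum(x * M for x, M, _ in components.values())
mass_frac = {k: x * M / total_mass for k, (x, M, _) in components.items()}

# Ideal-mixture estimate: specific volumes add by mass fraction.
spec_volume = sum(w / components[k][2] for k, w in mass_frac.items())
print(f"estimated density of liquid air ~ {1.0 / spec_volume:.0f} kg/m^3")

With the round numbers used here the estimate comes out close to 870 kg/m^3, consistent with commonly quoted values for liquid air.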
Liquid air
[ "Physics", "Chemistry", "Engineering" ]
966
[ "Applied and interdisciplinary physics", "Engineering thermodynamics", "Phases of matter", "Cryogenics", "Industrial gases", "Thermodynamics", "Mechanical engineering", "Chemical process engineering", "Matter" ]
972,328
https://en.wikipedia.org/wiki/Wedderburn%E2%80%93Etherington%20number
In mathematics and computer science, the Wedderburn–Etherington numbers are an integer sequence named after Ivor Malcolm Haddon Etherington and Joseph Wedderburn that can be used to count certain kinds of binary trees. The first few numbers in the sequence are 0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451, 983, 2179, 4850, 10905, 24631, 56011, ... () Combinatorial interpretation These numbers can be used to solve several problems in combinatorial enumeration. The nth number in the sequence (starting with the number 0 for n = 0) counts The number of unordered rooted trees with n leaves in which all nodes including the root have either zero or exactly two children. These trees have been called Otter trees, after the work of Richard Otter on their combinatorial enumeration. They can also be interpreted as unlabeled and unranked dendrograms with the given number of leaves. The number of unordered rooted trees with n nodes in which the root has degree zero or one and all other nodes have at most two children. Trees in which the root has at most one child are called planted trees, and the additional condition that the other nodes have at most two children defines the weakly binary trees. In chemical graph theory, these trees can be interpreted as isomers of polyenes with a designated leaf atom chosen as the root. The number of different ways of organizing a single-elimination tournament for n players (with the player names left blank, prior to seeding players into the tournament). The pairings of such a tournament may be described by an Otter tree. The number of different results that could be generated by different ways of grouping the expression x^n for a binary multiplication operation that is assumed to be commutative but neither associative nor idempotent. For instance x^5 = x·x·x·x·x can be grouped into binary multiplications in three ways, as x(x(x(xx))), x((xx)(xx)), or (xx)(x(xx)). This was the interpretation originally considered by both Etherington and Wedderburn. An Otter tree can be interpreted as a grouped expression in which each leaf node corresponds to one of the copies of x and each non-leaf node corresponds to a multiplication operation. In the other direction, the set of all Otter trees, with a binary multiplication operation that combines two trees by making them the two subtrees of a new root node, can be interpreted as the free commutative magma on one generator x (the tree with one node). In this algebraic structure, each grouping of x^n has as its value one of the n-leaf Otter trees. Formula The Wedderburn–Etherington numbers may be calculated using the recurrence relation a(2n − 1) = Σ_{i=1}^{n−1} a(i) a(2n − 1 − i) and a(2n) = a(n)(a(n) + 1)/2 + Σ_{i=1}^{n−1} a(i) a(2n − i), beginning with the base case a(1) = 1. In terms of the interpretation of these numbers as counting rooted binary trees with n leaves, the summation in the recurrence counts the different ways of partitioning these leaves into two subsets, and of forming a subtree having each subset as its leaves. The formula for even values of n is slightly more complicated than the formula for odd values in order to avoid double counting trees with the same number of leaves in both subtrees. Growth rate The Wedderburn–Etherington numbers grow asymptotically as a(n) ≈ √((ρ + ρ²B′(ρ²))/(2π)) · ρ^(−n) / n^(3/2), where B is the generating function of the numbers and ρ is its radius of convergence, approximately 0.4027, and where the constant given by the part of the expression in the square root is approximately 0.3188. Applications One proposed encryption system containing a hidden backdoor uses the Wedderburn–Etherington numbers as part of its design. 
When an input to be encrypted by this system can be sufficiently compressed by Huffman coding, it is replaced by the compressed form together with additional information that leaks key data to the attacker. In this system, the shape of the Huffman coding tree is described as an Otter tree and encoded as a binary number in the interval from 0 to the Wedderburn–Etherington number for the number of symbols in the code. In this way, the encoding uses a very small number of bits, the base-2 logarithm of the Wedderburn–Etherington number. A related scheme describes a similar encoding technique for rooted unordered binary trees, based on partitioning the trees into small subtrees and encoding each subtree as a number bounded by the Wedderburn–Etherington number for its size. That scheme allows these trees to be encoded in a number of bits that is close to the information-theoretic lower bound (the base-2 logarithm of the Wedderburn–Etherington number) while still allowing constant-time navigation operations within the tree. Unordered binary trees, together with the fact that the Wedderburn–Etherington numbers are significantly smaller than the numbers that count ordered binary trees, have also been used to significantly reduce the number of terms in a series representation of the solution to certain differential equations. See also Catalan number Cryptography Information theory References Further reading Integer sequences Trees (graph theory) Graph enumeration
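The recurrence given in the Formula section translates directly into a short memoized computation. The following Python sketch is my own implementation of that even/odd recurrence and reproduces the opening values of the sequence listed above.

from functools import lru_cache

@lru_cache(maxsize=None)
def wedderburn_etherington(n):
    """a(n): number of Otter trees with n leaves (a(0) = 0, a(1) = 1)."""
    if n <= 1:
        return n
    if n % 2 == 1:                      # odd n = 2k - 1: split into unequal halves
        k = (n + 1) // 2
        return sum(wedderburn_etherington(i) * wedderburn_etherington(n - i)
                   for i in range(1, k))
    k = n // 2                          # even n = 2k: the two halves may be equal
    a_k = wedderburn_etherington(k)
    return a_k * (a_k + 1) // 2 + sum(
        wedderburn_etherington(i) * wedderburn_etherington(n - i)
        for i in range(1, k))

print([wedderburn_etherington(n) for n in range(11)])
# expected, matching the sequence above: [0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98]

Because each value is reused many times, memoization (here via lru_cache) keeps the computation polynomial rather than exponential in n.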
Wedderburn–Etherington number
[ "Mathematics" ]
1,034
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Graph enumeration", "Recreational mathematics", "Mathematical objects", "Graph theory", "Combinatorics", "Mathematical relations", "Numbers", "Number theory" ]
972,333
https://en.wikipedia.org/wiki/Highly%20totient%20number
A highly totient number is an integer k that has more solutions to the equation φ(x) = k, where φ is Euler's totient function, than any integer smaller than it. The first few highly totient numbers are 1, 2, 4, 8, 12, 24, 48, 72, 144, 240, 432, 480, 576, 720, 1152, 1440, with 2, 3, 4, 5, 6, 10, 11, 17, 21, 31, 34, 37, 38, 49, 54, and 72 totient solutions respectively. The sequence of highly totient numbers is a subset of the sequence of the smallest number k with exactly n solutions to φ(x) = k. The totient of a number x, with prime factorization x = ∏ p_i^(e_i), is the product φ(x) = ∏ p_i^(e_i − 1)(p_i − 1). Thus, a highly totient number is a number that has more ways of being expressed as a product of this form than does any smaller number. The concept is somewhat analogous to that of highly composite numbers, and in the same way that 1 is the only odd highly composite number, it is also the only odd highly totient number (indeed, the only odd number to not be a nontotient). And just as there are infinitely many highly composite numbers, there are also infinitely many highly totient numbers, though the highly totient numbers get tougher to find the higher one goes, since calculating the totient function involves factorization into primes, something that becomes extremely difficult as the numbers get larger. Example There are five numbers (15, 16, 20, 24, and 30) whose totient is 8. No positive integer smaller than 8 has as many such solutions, so 8 is highly totient. Table See also Highly cototient number References Integer sequences
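The definition above lends itself to a brute-force search: compute φ over a sufficiently large range, tally how many preimages each candidate value has, and keep the record-setters. The Python sketch below is my own illustration; it uses a standard sieve for φ, relies on the elementary bound φ(x) ≥ √(x/2) to know how far the scan must go, and the search limit K = 150 is an arbitrary choice.

def totients(limit):
    """phi(1..limit) computed with a sieve (index 0 unused)."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                      # p is prime
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p        # multiply by (1 - 1/p)
    return phi

def highly_totient_up_to(K):
    """Highly totient numbers k <= K, with their solution counts.

    Every solution of phi(x) = k satisfies x <= 2*k*k (since phi(x) >= sqrt(x/2)),
    so scanning x up to 2*K*K finds all solutions for every k <= K.
    """
    limit = 2 * K * K
    phi = totients(limit)
    counts = [0] * (K + 1)
    for x in range(1, limit + 1):
        if phi[x] <= K:
            counts[phi[x]] += 1
    record, result = 0, []
    for k in range(1, K + 1):
        if counts[k] > record:
            record = counts[k]
            result.append((k, counts[k]))
    return result

print(highly_totient_up_to(150))
# expected, matching the counts quoted above:
# [(1, 2), (2, 3), (4, 4), (8, 5), (12, 6), (24, 10), (48, 11), (72, 17), (144, 21)]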
Highly totient number
[ "Mathematics" ]
365
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]