https://en.wikipedia.org/wiki/Albedo
Albedo is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation).
Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun). While bi-hemispherical reflectance is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages.
Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation).
Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, therefore it reflects far more solar energy back to space than the other types of land area or open water. Ice–albedo feedback plays an important role in global climate change.
Albedo is an important concept in climatology, astronomy, and environmental management. The average albedo of the Earth from the upper atmosphere, its planetary albedo, is 30–35% because of cloud cover, but widely varies locally across the surface because of different geological and environmental features.
Terrestrial albedo
Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds.
Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. The BRDF can facilitate translations of observations of reflectance into albedo.
Earth's average surface temperature due to its albedo and the greenhouse effect is currently about . If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop below . If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about . In contrast, if the entire Earth were covered by water – a so-called ocean planet – the average temperature on the planet would rise to almost .
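The relationship between planetary albedo and temperature can be illustrated with the standard zero-dimensional energy-balance model. This is a textbook sketch, not the source of the figures above; the solar-constant value and the assumption of blackbody emission are mine.

```python
# Zero-dimensional energy balance: effective temperature as a function of
# planetary albedo. Illustrative assumptions: total solar irradiance of
# 1361 W/m^2 and blackbody emission (no greenhouse effect).

SOLAR_CONSTANT = 1361.0            # W/m^2 at Earth's distance from the Sun
STEFAN_BOLTZMANN = 5.670374419e-8  # W/(m^2 K^4)

def effective_temperature(albedo: float) -> float:
    """Blackbody effective temperature (K) for a planet with the given albedo."""
    # Absorbed flux averaged over the whole sphere (factor of 4).
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0
    return (absorbed / STEFAN_BOLTZMANN) ** 0.25

t_current = effective_temperature(0.30)   # Earth's planetary albedo, ~255 K
t_snowball = effective_temperature(0.84)  # a hypothetical ice-covered Earth
```

The ~255 K result for an albedo of 0.3 is the familiar effective temperature of Earth without the greenhouse effect; the gap between it and the observed surface temperature is the greenhouse contribution.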
In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017), as measured by earthshine using modern photometric techniques. This dimming may be partly caused by climate change and, by reducing reflected sunlight, may in turn contribute to further global warming. However, the link to climate change has not been explored to date, and it is unclear whether this represents an ongoing trend.
White-sky, black-sky, and blue-sky albedo
For land surfaces, it has been shown that the albedo at a particular solar zenith angle θi can be approximated by the proportionate sum of two terms:
the directional-hemispherical reflectance at that solar zenith angle, ᾱbs(θi), sometimes referred to as black-sky albedo, and
the bi-hemispherical reflectance, ᾱws, sometimes referred to as white-sky albedo,
with (1 − D) being the proportion of direct radiation from a given solar angle and D being the proportion of diffuse illumination. The actual albedo α (also called blue-sky albedo) can then be given as:
α = (1 − D) ᾱbs(θi) + D ᾱws
This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface.
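A minimal sketch of this interpolation, assuming the black-sky and white-sky reflectances and the diffuse fraction are already known (e.g., from a BRDF model and an atmospheric model):

```python
def blue_sky_albedo(black_sky: float, white_sky: float,
                    diffuse_fraction: float) -> float:
    """Actual (blue-sky) albedo as the proportionate sum of the
    directional-hemispherical (black-sky) and bi-hemispherical (white-sky)
    reflectances, weighted by the diffuse fraction D of the illumination."""
    d = diffuse_fraction
    return (1.0 - d) * black_sky + d * white_sky

# Under purely direct illumination (D = 0) the albedo is the black-sky value;
# under a fully overcast sky (D = 1) it is the white-sky value.
```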
Human activities
Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. As per Campra et al., human impacts on "the physical properties of the land surface can perturb the climate by altering the Earth's radiative energy balance" even on a small scale or when undetected by satellites.
The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface temperature of the high-albedo area, although the changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change.
It has been found that urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate urban heat island. Ouyang et al. estimated that, on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions."
Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy to mitigate energy crises and global warming known as passive daytime radiative cooling (PDRC). Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved.
Examples of terrestrial albedo effects
Illumination
Albedo is not directly dependent on illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). That said, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics.
Insolation effects
The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes.
Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern because Arctic ice and snow have been melting at higher rates due to rising temperatures, creating darker regions of open water or exposed ground that reflect less heat back into space. This feedback loop weakens the cooling effect of the region's albedo.
Climate and weather
Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather.
The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds.
Albedo–temperature feedback
When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight and leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting, reduces the albedo further, and results in still more heating.
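The runaway character of this feedback can be sketched with a toy iteration in which cooling raises albedo and higher albedo cools further. Every coefficient and threshold below is an illustrative assumption, not a measured value.

```python
# Toy snow/albedo-temperature feedback. Illustrative sensitivities only:
# cooling adds snow cover (raising albedo, clamped to [0.1, 0.9]), and
# higher albedo reduces absorbed sunlight (lowering temperature).

def step(temp_c: float, albedo: float) -> tuple[float, float]:
    """One feedback step: albedo above a 0.3 baseline cools the surface,
    and a colder surface accumulates more snow, raising the albedo."""
    new_temp = temp_c - 5.0 * (albedo - 0.3)           # cooling from reflection
    new_albedo = min(0.9, max(0.1, albedo - 0.02 * new_temp))  # snow response
    return new_temp, new_albedo

temp, albedo = -1.0, 0.5  # start slightly below freezing with partial snow
for _ in range(20):
    temp, albedo = step(temp, albedo)
# The loop runs away toward maximum snow albedo and ever-lower temperature.
```

Starting just below freezing, the loop drives the system to the maximum snow albedo and steadily lower temperatures until some external process (seasonal insolation, warm air masses) interrupts it, which is exactly the runaway tendency the paragraph describes.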
Snow
Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (the ice–albedo positive feedback).
Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.
Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.
The dynamic nature of albedo in response to positive feedback, together with the effects of small errors in its measurement, can lead to large errors in energy estimates. To reduce such errors, it is therefore important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single albedo value over broad regions.
Small-scale effects
Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing.
Solar photovoltaic effects
Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effects of a spectrally responsive albedo are illustrated by comparing the spectrally weighted albedo of solar photovoltaic technologies based on hydrogenated amorphous silicon (a-Si:H) and crystalline silicon (c-Si) with traditional spectral-integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells, where rear-surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis of the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops, and residential pitched-roof applications.
Trees
Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo).
In the case of evergreen forests with seasonal snow cover, albedo reduction may be great enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as a strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate.
Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit.
In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy.
Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming.
Water
Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.
At the scale of the wavelength of light, even wavy water is always smooth, so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle.
Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.
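The angular behavior described here follows directly from the Fresnel equations for a smooth air–water interface. Below is a sketch for unpolarized light, assuming a refractive index of about 1.333 for water; note that the angle in the code is measured from the surface normal (the usual optics convention), so grazing incidence corresponds to angles near 90°.

```python
import math

def fresnel_reflectance(theta_i_deg: float, n1: float = 1.0,
                        n2: float = 1.333) -> float:
    """Unpolarized Fresnel reflectance for light hitting a smooth water
    surface from air, averaging the s- and p-polarized components.
    theta_i_deg is measured from the surface normal."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)  # refraction angle via Snell's law
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)

# Normal incidence reflects only ~2% of the light, while near-grazing
# incidence (sun low on the horizon) reflects most of it.
```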
Note that white caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection.
Snow on top of this sea ice increases the albedo to 0.9.
Clouds
Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth."
Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during the Iraqi occupation showed that temperatures under the burning oil fires were markedly colder than temperatures several miles away under clear skies.
Aerosol effects
Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain. As per Spracklen et al. the effects are:
Aerosol direct effect. Aerosols directly scatter and absorb radiation. The scattering of radiation causes atmospheric cooling, whereas absorption can cause atmospheric warming.
Aerosol indirect effect. Aerosols modify the properties of clouds through a subset of the aerosol population called cloud condensation nuclei. Increased nuclei concentrations lead to increased cloud droplet number concentrations, which in turn leads to increased cloud albedo, increased light scattering and radiative cooling (first indirect effect), but also leads to reduced precipitation efficiency and increased lifetime of the cloud (second indirect effect).
In extremely polluted cities like Delhi, aerosol pollutants influence local weather and induce an urban cool island effect during the day.
Black carbon
Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m−2, with a range +0.1 to +0.4 W m−2. Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo.
Astronomical albedo
In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved.
Optical or visual albedo
The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids.
Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds.
The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies.
Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion.
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation.
An important relationship between an object's astronomical (geometric) albedo, absolute magnitude, and diameter is given by:
A = (1329 × 10^(−H/5) / D)²,
where A is the astronomical albedo, D is the diameter in kilometers, and H is the absolute magnitude.
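Rearranged for diameter, this relation gives the standard estimator used for asteroids. A sketch, assuming the albedo is the V-band geometric albedo:

```python
import math

def diameter_km(geometric_albedo: float, abs_magnitude: float) -> float:
    """Asteroid diameter (km) from the standard relation
    D = (1329 / sqrt(A)) * 10**(-H/5), the rearranged form of
    A = (1329 * 10**(-H/5) / D)**2."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10.0 ** (-abs_magnitude / 5.0)

# Roughly Ceres-like values (H = 3.34, A = 0.09) give a diameter near 950 km.
```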
Radar albedo
In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g., the Moon, an asteroid) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, σ_OC, σ_SC, or σ_T (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (a perfect reflector) at the same distance as the target that would return the same echo power.
Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering.
For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo):
σ̂_OC = σ_OC / (π R²),
where the denominator is the effective cross-sectional area of the target object with mean radius R. A smooth metallic sphere would have σ̂_OC = 1.
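The normalization itself is a one-liner; a sketch assuming the OC cross-section and mean radius are given in consistent units (e.g., km² and km):

```python
import math

def oc_radar_albedo(sigma_oc: float, mean_radius: float) -> float:
    """Normalized OC radar albedo: the OC radar cross-section divided by
    the target's geometric cross-sectional area, pi * R**2."""
    return sigma_oc / (math.pi * mean_radius ** 2)

# A smooth metallic sphere returns an echo equal to its geometric
# cross-section, so its normalized OC radar albedo is 1.
```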
Radar albedos of Solar System objects
The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references.
Relationship to surface bulk density
In the event that most of the echo is from first-surface reflections, the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (also known as reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths, typically at the decimeter scale) using empirical relationships.
History
The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria.
See also
Cool roof
Daisyworld
Emissivity
Exitance
Global dimming
Ice–albedo feedback
Irradiance
Kirchhoff's law of thermal radiation
Opposition surge
Polar see-saw
Radar astronomy
Solar radiation management
References
External links
Albedo Project
Albedo – Encyclopedia of Earth
NASA MODIS BRDF/albedo product site
Ocean surface albedo look-up-table
Surface albedo derived from Meteosat observations
A discussion of Lunar albedos
reflectivity of metals (chart)
Land surface effects on climate
Climate change feedbacks
Climate forcing
Climatology
Electromagnetic radiation
Meteorological quantities
Radiometry
Scattering, absorption and radiative transfer (optics)
Radiation
1760s neologisms
https://en.wikipedia.org/wiki/Altruism
Altruism is the principle and practice of concern for the well-being and/or happiness of other humans or animals. While objects of altruistic concern vary, it is an important moral value in many cultures and religions. It may be considered a synonym of selflessness, the opposite of selfishness.
The word altruism was popularized (and possibly coined) by the French philosopher Auguste Comte in French, as altruisme, as an antonym of egoism. He derived it from the Italian altrui, which in turn was derived from Latin alteri, meaning "other people" or "somebody else".
Altruism, as observed in populations of organisms, is when an individual performs an action at a cost to themselves (in terms of e.g. pleasure and quality of life, time, probability of survival or reproduction) that benefits, directly or indirectly, another individual, without the expectation of reciprocity or compensation for that action.
Altruism can be distinguished from feelings of loyalty or concern for the common good. The latter are predicated upon social relationships, whilst altruism does not consider relationships. Whether "true" altruism is possible in human psychology is a subject of debate. The theory of psychological egoism suggests that no act of sharing, helping, or sacrificing can be truly altruistic, as the actor may receive an intrinsic reward in the form of personal gratification. The validity of this argument depends on whether such intrinsic rewards qualify as "benefits".
The term altruism may also refer to an ethical doctrine that claims that individuals are morally obliged to benefit others. Used in this sense, it is usually contrasted with egoism, which claims individuals are morally obligated to serve themselves first.
Effective altruism is the use of evidence and reason to determine the most effective ways to benefit others.
The notion of altruism
The concept of altruism has a history in philosophical and ethical thought. The term was coined in the 19th century by the founding sociologist and philosopher of science Auguste Comte, and has become a major topic for psychologists (especially evolutionary psychology researchers), evolutionary biologists, and ethologists. Whilst ideas about altruism from one field can affect the other fields, the different methods and focuses of these fields lead to different perspectives on altruism. In simple terms, altruism is caring about the welfare of other people and acting to help them.
Scientific viewpoints
Anthropology
Marcel Mauss's essay The Gift contains a passage called "Note on alms", which describes the evolution of the notion of alms (and by extension of altruism) from the notion of sacrifice.
Evolutionary explanations
In the science of ethology (the study of animal behaviour), and more generally in the study of social evolution, altruism refers to behavior by an individual that increases the fitness of another individual while decreasing the fitness of the actor. In evolutionary psychology this term may be applied to a wide range of human behaviors such as charity, emergency aid, help to coalition partners, tipping, courtship gifts, production of public goods, and environmentalism.
Theories of apparently altruistic behavior were accelerated by the need to produce ideas compatible with evolutionary origins. Two related strands of research on altruism have emerged from traditional evolutionary analyses and evolutionary game theory: a mathematical model and analysis of behavioral strategies.
Some of the proposed mechanisms are:
Kin selection. That animals and humans are more altruistic towards close kin than to distant kin and non-kin has been confirmed in numerous studies across many different cultures. Even subtle cues indicating kinship may unconsciously increase altruistic behavior. One kinship cue is facial resemblance. One study found that slightly altering photographs to resemble the faces of study participants more closely increased the trust the participants expressed regarding depicted persons. Another cue is having the same family name, especially if rare, which has been found to increase helpful behavior. Another study found that cooperative behavior increased with the number of perceived kin in a group. Using kinship terms in political speeches increased audience agreement with the speaker in one study; this effect was particularly strong for firstborns, who are typically close to their families.
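Kin selection is commonly summarized by Hamilton's rule, a standard result from the kin-selection literature (not stated in the text above): an altruistic trait can spread when relatedness times benefit exceeds the cost, rb > c. A minimal sketch:

```python
def hamiltons_rule(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: altruism can be favored by selection when
    r * b > c, where r is genetic relatedness, b the benefit to the
    recipient, and c the cost to the altruist (both in fitness units)."""
    return relatedness * benefit > cost

# Helping a full sibling (r = 0.5) is favored only when the benefit is
# more than twice the cost to the altruist.
```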
Vested interests. People are likely to suffer if their friends, allies and those from similar social ingroups suffer or disappear. Helping such group members may, therefore, also benefit the altruist. Making ingroup membership more noticeable increases cooperativeness. Extreme self-sacrifice towards the ingroup may be adaptive if a hostile outgroup threatens the entire ingroup.
Reciprocal altruism. See also Reciprocity (evolution).
Direct reciprocity. Research shows that it can be beneficial to help others if there is a chance that they will reciprocate the help. The effective tit for tat strategy is one game theoretic example. Many people seem to be following a similar strategy by cooperating if and only if others cooperate in return.
One consequence is that people are more cooperative with one another if they are more likely to interact again in the future. People tend to be less cooperative if they perceive that the frequency of helpers in the population is lower. They tend to help less if they see non-cooperativeness by others, and this effect tends to be stronger than the opposite effect of seeing cooperative behaviors. Simply changing the cooperative framing of a proposal may increase cooperativeness, such as calling it a "Community Game" instead of a "Wall Street Game".
A tendency towards reciprocity implies that people feel obligated to respond if someone helps them. This has been used by charities that give small gifts to potential donors hoping to induce reciprocity. Another method is to announce publicly that someone has given a large donation. The tendency to reciprocate can even generalize, so people become more helpful toward others after being helped. On the other hand, people will avoid or even retaliate against those perceived not to be cooperating. People sometimes mistakenly fail to help when they intended to, or their helping may not be noticed, which may cause unintended conflicts. As such, it may be an optimal strategy to be slightly forgiving of and have a slightly generous interpretation of non-cooperation.
People are more likely to cooperate on a task if they can communicate with one another first. This may be due to better cooperativeness assessments or promises exchange. They are more cooperative if they can gradually build trust instead of being asked to give extensive help immediately. Direct reciprocity and cooperation in a group can be increased by changing the focus and incentives from intra-group competition to larger-scale competitions, such as between groups or against the general population. Thus, giving grades and promotions based only on an individual's performance relative to a small local group, as is common, may reduce cooperative behaviors in the group.
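The tit-for-tat strategy mentioned above can be sketched as an iterated prisoner's dilemma. The payoff values here are the conventional illustrative ones from game theory, not from the text:

```python
# Minimal iterated prisoner's dilemma. Payoffs per round, as (row, column):
# mutual cooperation 3 each, mutual defection 1 each, sucker's payoff 0,
# temptation to defect 5 (conventional illustrative values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return their total scores."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # moves made by each side so far
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Against another tit-for-tat player the strategy sustains full cooperation, while against an unconditional defector it loses only the first round and then defects in kind, which is why it rewards those who cooperate in return.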
Indirect reciprocity. Because people avoid poor reciprocators and cheaters, a person's reputation is important. A person esteemed for their reciprocity is more likely to receive assistance, even from individuals they haven't directly interacted with before.
Strong reciprocity. This form of reciprocity is expressed by people who invest more resources in cooperation and punishment than what is deemed optimal based on established theories of altruism.
Pseudo-reciprocity. An organism behaves altruistically and the recipient does not reciprocate but has an increased chance of acting in a way that is selfish but also as a byproduct benefits the altruist.
Costly signaling and the handicap principle. Altruism, by diverting resources from the altruist, can act as an "honest signal" of available resources and the skills to acquire them. This may signal to others that the altruist is a valuable potential partner. It may also signal interactive and cooperative intentions, since someone who does not expect to interact further in the future gains nothing from such costly signaling. While it's uncertain if costly signaling can predict long-term cooperative traits, people tend to trust helpers more. Costly signaling loses its value when everyone shares identical traits, resources, and cooperative intentions, but it gains significance as population variability in these aspects increases.
Hunters who share meat display a costly signal of ability. The research found that good hunters have higher reproductive success and more adulterous relations even if they receive no more of the hunted meat than anyone else. Similarly, holding large feasts and giving large donations are ways of demonstrating one's resources. Heroic risk-taking has also been interpreted as a costly signal of ability.
Both indirect reciprocity and costly signaling depend on reputation value and tend to make similar predictions. One is that people will be more helpful when they know that their helping behavior will be communicated to people they will interact with later, publicly announced, discussed, or observed by someone else. This has been documented in many studies. The effect is sensitive to subtle cues, such as people being more helpful when there were stylized eyespots instead of a logo on a computer screen. Weak reputational cues such as eyespots may become unimportant if there are stronger cues present and may lose their effect with continued exposure unless reinforced with real reputational effects. Public displays such as public weeping for dead celebrities and participation in demonstrations may be influenced by a desire to be seen as generous. People who know that they are publicly monitored sometimes even wastefully donate the money they know is not needed by the recipient because of reputational concerns.
Women find altruistic men to be attractive partners. When women look for a long-term partner, altruism may be a trait they prefer as it may indicate that the prospective partner is also willing to share resources with her and her children. Men perform charitable acts in the early stages of a romantic relationship or simply when in the presence of an attractive woman. While both sexes state that kindness is the most preferable trait in a partner, there is some evidence that men place less value on this than women and that women may not be more altruistic in the presence of an attractive man. Men may even avoid altruistic women in short-term relationships, which may be because they expect less success.
People may compete for the social benefit of a burnished reputation, which may cause competitive altruism. On the other hand, in some experiments, a proportion of people do not seem to care about reputation and do not help more, even if this is conspicuous. This may be due to reasons such as psychopathy or that they are so attractive that they need not be seen as altruistic. The reputational benefits of altruism occur in the future compared to the immediate costs of altruism. While humans and other organisms generally place less value on future costs/benefits as compared to those in the present, some have shorter time horizons than others, and these people tend to be less cooperative.
Explicit extrinsic rewards and punishments have sometimes been found to have a counterintuitively inverse effect on behaviors when compared to intrinsic rewards. This may be because such extrinsic incentives may replace (partially or in whole) intrinsic and reputational incentives, motivating the person to focus on obtaining the extrinsic rewards, which may make the thus-incentivized behaviors less desirable. People prefer altruism in others when it appears to be due to a personality characteristic rather than overt reputational concerns; simply pointing out that there are reputational benefits of action may reduce them. This may be used as a derogatory tactic against altruists ("you're just virtue signalling"), especially by those who are non-cooperators. A counterargument is that doing good due to reputational concerns is better than doing no good.
Group selection. It has controversially been argued by some evolutionary scientists such as David Sloan Wilson that natural selection can act at the level of non-kin groups to produce adaptations that benefit a non-kin group, even if these adaptations are detrimental at the individual level. Thus, while altruistic persons may under some circumstances be outcompeted by less altruistic persons at the individual level, according to group selection theory, the opposite may occur at the group level where groups consisting of the more altruistic persons may outcompete groups consisting of the less altruistic persons. Such altruism may only extend to ingroup members while directing prejudice and antagonism against outgroup members (see also in-group favoritism). Many other evolutionary scientists have criticized group selection theory.
Such explanations do not imply that humans consciously calculate how to increase their inclusive fitness when doing altruistic acts. Instead, evolution has shaped psychological mechanisms, such as emotions, that promote certain altruistic behaviors.
The benefits for the altruist may be increased, and the costs reduced, by being more altruistic towards certain groups. Research has found that people are more altruistic to kin than to non-kin, to friends than to strangers, to the attractive than to the unattractive, to non-competitors than to competitors, and to members of in-groups than to members of out-groups.
The study of altruism was the initial impetus behind George R. Price's development of the Price equation, a mathematical equation used to study genetic evolution. An interesting example of altruism is found in the cellular slime moulds, such as Dictyostelium mucoroides. These protists live as individual amoebae until starved, at which point they aggregate and form a multicellular fruiting body in which some cells sacrifice themselves to promote the survival of other cells in the fruiting body.
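The Price equation itself is short; in its standard form it partitions the change in a trait's population mean $\bar{z}$ between generations into a selection (covariance) term and a transmission term:

$$\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}\left(w_i\,\Delta z_i\right)$$

where $w_i$ is the fitness of individual $i$ and $\bar{w}$ is the mean fitness. Applied to altruism, an altruistic trait may covary negatively with fitness within a group yet positively across groups, which is how the equation accommodates both kin- and group-level accounts.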
Selective investment theory proposes that close social bonds, and associated emotional, cognitive, and neurohormonal mechanisms, evolved to facilitate long-term, high-cost altruism between those closely depending on one another for survival and reproductive success.
Such cooperative behaviors have sometimes been seen as arguments for left-wing politics, for example, by the Russian zoologist and anarchist Peter Kropotkin in his 1902 book Mutual Aid: A Factor of Evolution and by the moral philosopher Peter Singer in his book A Darwinian Left.
Neurobiology
Jorge Moll and Jordan Grafman, neuroscientists at the National Institutes of Health and LABS-D'Or Hospital Network, provided the first evidence for the neural bases of altruistic giving in normal healthy volunteers, using functional magnetic resonance imaging. In their research, they showed that both pure monetary rewards and charitable donations activated the mesolimbic reward pathway, a primitive part of the brain that usually responds to food and sex. However, when volunteers generously placed the interests of others before their own by making charitable donations, another brain circuit was selectively activated: the subgenual cortex/septal region. These structures are intimately related to social attachment and bonding in other species. The experiment indicated that altruism is not a higher moral faculty that overpowers innate selfish desires, but a fundamental, ingrained, and pleasurable trait of the brain. One brain region, the subgenual anterior cingulate cortex/basal forebrain, contributes to learning altruistic behavior, especially in people with empathy. The same study linked giving to charity with the promotion of social bonding.
Bill Harbaugh, a University of Oregon economist, reached the same conclusions as Jorge Moll and Jordan Grafman about giving to charity in an fMRI study conducted with his psychologist colleague Ulrich Mayr, although they were able to divide the study group into two groups: "egoists" and "altruists". One of their discoveries was that, though rarely, even some of those considered "egoists" sometimes gave more than expected when it would help others, leading to the conclusion that other factors influence charity, such as a person's environment and values.
Psychology
The International Encyclopedia of the Social Sciences defines psychological altruism as "a motivational state to increase another's welfare". Psychological altruism is contrasted with psychological egoism, which refers to the motivation to increase one's welfare.
There has been some debate on whether humans are capable of psychological altruism. Some definitions specify a self-sacrificial nature to altruism and a lack of external rewards for altruistic behaviors. However, because altruism ultimately benefits the self in many cases, the selflessness of altruistic acts is difficult to prove. The social exchange theory postulates that altruism only exists when the benefits outweigh the costs to the self.
Daniel Batson, a psychologist, examined this question and argued against the social exchange theory. He identified four significant motives: to ultimately benefit the self (egoism), to ultimately benefit the other person (altruism), to benefit a group (collectivism), or to uphold a moral principle (principlism). Altruism that ultimately serves selfish gains is thus differentiated from selfless altruism, but the general conclusion has been that empathy-induced altruism can be genuinely selfless.
The empathy-altruism hypothesis states that psychological altruism exists and is evoked by the empathic desire to help someone suffering. Feelings of empathic concern are contrasted with personal distress, which compels people to reduce their own unpleasant emotions and increase their positive ones by helping someone in need. Empathy is thus not selfless, since altruism works either as a way to avoid those negative, unpleasant feelings and have positive, pleasant feelings when triggered by others' need for help, or as a way to gain social reward or avoid social punishment by helping. People with empathic concern help others in distress even when exposure to the situation could be easily avoided, whereas those lacking in empathic concern avoid helping unless it is difficult or impossible to avoid exposure to another's suffering.
Helping behavior is seen in humans from about two years old when a toddler can understand subtle emotional cues.
In psychological research on altruism, studies often observe altruism as demonstrated through prosocial behaviors such as helping, comforting, sharing, cooperation, philanthropy, and community service. People are most likely to help if they recognize that a person is in need and feel personal responsibility for reducing the person's distress. The number of bystanders witnessing pain or suffering affects the likelihood of helping (the bystander effect): larger numbers of bystanders decrease individual feelings of responsibility. However, a witness with a high level of empathic concern is likely to assume personal responsibility regardless of the number of bystanders.
Many studies have observed the effects of volunteerism (as a form of altruism) on happiness and health and have consistently found that those who exhibit volunteerism also have better current and future health and well-being. In a study of older adults, those who volunteered had higher life satisfaction and will to live, and less depression, anxiety, and somatization. Volunteerism and helping behavior have not only been shown to improve mental health but physical health and longevity as well, attributable to the activity and social integration they encourage. One study examined the physical health of mothers who volunteered over 30 years and found that 52% of those who did not belong to a volunteer organization experienced a major illness, while only 36% of those who did volunteer experienced one. A study on adults aged 55 and older found that during the four-year study period, people who volunteered for two or more organizations had a 63% lower likelihood of dying. After controlling for prior health status, it was determined that volunteerism accounted for a 44% reduction in mortality. Merely being aware of kindness in oneself and others is also associated with greater well-being. In one study, participants who counted each act of kindness they performed for one week significantly enhanced their subjective happiness.
While research supports the idea that altruistic acts bring about happiness, it has also been found to work in the opposite direction—that happier people are also kinder. The relationship between altruistic behavior and happiness is bidirectional. Studies found that generosity increases linearly from sad to happy affective states.
Feeling over-taxed by the needs of others has negative effects on health and happiness. For example, one study on volunteerism found that feeling overwhelmed by others' demands had an even stronger negative effect on mental health than helping had a positive one (although positive effects were still significant).
Pathological altruism
Pathological altruism is altruism taken to an unhealthy extreme, such that it either harms the altruistic person or the person's well-intentioned actions cause more harm than good.
The term "pathological altruism" was popularised by the book Pathological Altruism.
Examples include depression and burnout seen in healthcare professionals, an unhealthy focus on others to the detriment of one's own needs, hoarding of animals, and ineffective philanthropic and social programs that ultimately worsen the situations they are meant to aid.
Sociology
"Sociologists have long been concerned with how to build the good society". The structure of our societies and how individuals come to exhibit charitable, philanthropic, and other pro-social, altruistic actions for the common good is a commonly researched topic within the field. The American Sociological Association (ASA) acknowledges public sociology, saying, "The intrinsic scientific, policy, and public relevance of this field of investigation in helping to construct 'good societies' is unquestionable". This type of sociology seeks contributions that aid popular and theoretical understandings of what motivates altruism and how it is organized, and promotes an altruistic focus in order to benefit the world and the people it studies.
How altruism is framed, organized, carried out, and what motivates it at the group level is an area of focus that sociologists investigate in order to contribute back to the groups it studies and "build the good society". The motivation of altruism is also the focus of study; for example, one study links the occurrence of moral outrage to altruistic compensation of victims. Studies show that generosity in laboratory and in online experiments is contagious – people imitate the generosity they observe in others.
Religious viewpoints
Most, if not all, of the world's religions promote altruism as a very important moral value. Religions such as Buddhism, Christianity, Hinduism, Islam, Jainism, Judaism, and Sikhism place particular emphasis on altruistic morality.
Buddhism
Altruism figures prominently in Buddhism. Love and compassion are components of all forms of Buddhism, and are focused on all beings equally: love is the wish that all beings be happy, and compassion is the wish that all beings be free from suffering. "Many illnesses can be cured by the one medicine of love and compassion. These qualities are the ultimate source of human happiness, and the need for them lies at the very core of our being" (Dalai Lama).
The notion of altruism is modified in such a world-view, since the belief is that such a practice promotes the practitioner's own happiness: "The more we care for the happiness of others, the greater our own sense of well-being becomes" (Dalai Lama).
In the context of larger ethical discussions on moral action and judgment, Buddhism is characterized by the belief that negative (unhappy) consequences of our actions derive not from punishment or correction based on moral judgment, but from the law of karma, which functions like a natural law of cause and effect. A simple illustration of such cause and effect is the case of experiencing the effects of what one causes: if one causes suffering, then as a natural consequence one would experience suffering; if one causes happiness, then as a natural consequence one would experience happiness.
Jainism
The fundamental principles of Jainism revolve around altruism, not only for humans but for all sentient beings. Jainism preaches living and letting live: not harming sentient beings, i.e., uncompromising reverence for all life. It also considers all living things to be equal. The first Tirthankara, Rishabhdev, introduced the concept of altruism for all living beings, from extending knowledge and experience to others to donation, giving oneself up for others, non-violence, and compassion for all living things.
The principle of nonviolence seeks to minimize karmas which limit the capabilities of the soul. Jainism views every soul as worthy of respect because it has the potential to become Paramatma (God in Jainism). Because all living beings possess a soul, great care and awareness is essential in one's actions. Jainism emphasizes the equality of all life, advocating harmlessness towards all, whether the creatures are great or small. This policy extends even to microscopic organisms. Jainism acknowledges that every person has different capabilities and capacities to practice and therefore accepts different levels of compliance for ascetics and householders.
Christianity
Thomas Aquinas interprets "You should love your neighbour as yourself" as meaning that love for ourselves is the exemplar of love for others. Considering that "the love with which a man loves himself is the form and root of friendship", he quotes Aristotle's view that "the origin of friendly relations with others lies in our relations to ourselves". Aquinas concluded that though we are not bound to love others more than ourselves, we naturally seek the common good, the good of the whole, more than any private good, the good of a part. However, he thought we should love God more than ourselves and our neighbours, and more than our bodily life, since the ultimate purpose of loving our neighbour is to share in eternal beatitude: a more desirable thing than bodily well-being. In coining the word "altruism", as stated above, Comte was probably opposing this Thomistic doctrine, which is present in some theological schools within Catholicism. The aim and focus of the Christian life is to glorify God, obeying Christ's command to care for others as for oneself, in view of the eternity in heaven that Jesus's resurrection at Calvary made possible.
Many biblical authors draw a strong connection between love of others and love of God. The First Epistle of John states that for one to love God one must love his fellow man, and that hatred of one's fellow man is the same as hatred of God. Thomas Jay Oord has argued in several books that altruism is but one possible form of love. An altruistic action is not always a loving action. Oord defines altruism as acting for the other's good, and he agrees with feminists who note that sometimes love requires acting for one's own good when the other's demands undermine overall well-being.
German philosopher Max Scheler distinguishes two ways in which the strong can help the weak. One way is a sincere expression of Christian love, "motivated by a powerful feeling of security, strength, and inner salvation, of the invincible fullness of one's own life and existence". Another way is merely "one of the many modern substitutes for love,... nothing but the urge to turn away from oneself and to lose oneself in other people's business". At its worst, Scheler says, "love for the small, the poor, the weak, and the oppressed is really disguised hatred, repressed envy, an impulse to detract, etc., directed against the opposite phenomena: wealth, strength, power, largesse."
Islam
In Islam, īthār (altruism) means "preferring others to oneself". For Sufis, this means devotion to others through complete forgetfulness of one's own concerns, where concern for others is deemed a demand made by Allah (i.e. God) on the human body, which is considered to be the property of Allah alone. The importance of īthār lies in sacrifice for the sake of the greater good; Islam considers those practicing īthār as abiding by the highest degree of nobility.
This is similar to the notion of chivalry, but unlike that European concept, in īthār attention is focused on everything in existence. A constant concern for Allah results in a careful attitude towards people, animals, and other things in this world.
Judaism
Judaism defines altruism as the desired goal of creation. Rabbi Abraham Isaac Kook stated that love is the most important attribute in humanity. Love is defined as bestowal, or giving, which is the intention of altruism. This can be altruism towards humanity that leads to altruism towards the creator or God. Kabbalah defines God as the force of giving in existence. Rabbi Moshe Chaim Luzzatto focused on the "purpose of creation" and how the will of God was to bring creation into perfection and adhesion with this force of giving.
Modern Kabbalah developed by Rabbi Yehuda Ashlag, in his writings about the future generation, focuses on how society could achieve an altruistic social framework. Ashlag proposed that such a framework is the purpose of creation, and everything that happens is to raise humanity to the level of altruism, love for one another. Ashlag focused on society and its relation to divinity.
Sikhism
Altruism is essential to the Sikh religion. The central faith in Sikhism is that the greatest deed anyone can do is to imbibe and live the godly qualities of love, affection, sacrifice, patience, harmony, and truthfulness. Seva, or selfless service to the community for its own sake, is an important concept in Sikhism.
The fifth Guru, Arjun Dev, sacrificed his life to uphold "22 carats of pure truth, the greatest gift to humanity", the Guru Granth. The ninth Guru, Tegh Bahadur, sacrificed his head to protect weak and defenseless people against atrocity.
In the late seventeenth century, Guru Gobind Singh (the tenth Guru in Sikhism), was at war with the Mughal rulers to protect the people of different faiths when a fellow Sikh, Bhai Kanhaiya, attended the troops of the enemy. He gave water to both friends and foes who were wounded on the battlefield. Some of the enemy began to fight again and some Sikh warriors were annoyed by Bhai Kanhaiya as he was helping their enemy. Sikh soldiers brought Bhai Kanhaiya before Guru Gobind Singh, and complained of his action that they considered counterproductive to their struggle on the battlefield. "What were you doing, and why?" asked the Guru. "I was giving water to the wounded because I saw your face in all of them", replied Bhai Kanhaiya. The Guru responded, "Then you should also give them ointment to heal their wounds. You were practicing what you were coached in the house of the Guru."
Under the tutelage of the Guru, Bhai Kanhaiya subsequently founded a volunteer corps for altruism, which is still engaged today in doing good to others and in training new recruits for this service.
Hinduism
In Hinduism, selflessness, love, kindness, and forgiveness are considered the highest acts of humanity. Giving alms to beggars or the poor is considered a divine act, and Hindus believe it will free their souls from guilt and lead them to heaven in the afterlife. Altruism is also central to various Hindu myths, religious poems, and songs. Mass donation of clothes to the poor, blood donation camps, and mass food donation for the poor are common in various Hindu religious ceremonies.
The Bhagavad Gita supports the doctrine of karma yoga (achieving oneness with God through action) and "Nishkam Karma", action without expectation of or desire for personal gain, which can be said to encompass altruism. Altruistic acts are generally celebrated and very well received in Hindu literature and are central to Hindu morality.
Philosophy
There is a wide range of philosophical views on humans' obligations or motivations to act altruistically. Proponents of ethical altruism maintain that individuals are morally obligated to act altruistically. The opposing view is ethical egoism, which maintains that moral agents should always act in their own self-interest. Both ethical altruism and ethical egoism contrast with utilitarianism, which maintains that each agent should act so as to maximise overall well-being, giving no special weight to their own interests.
A related concept in descriptive ethics is psychological egoism, the thesis that humans always act in their own self-interest and that true altruism is impossible. Rational egoism is the view that rationality consists in acting in one's self-interest (without specifying how this affects one's moral obligations).
Effective altruism
Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. Effective altruism encourages individuals to consider all causes and actions and to act in the way that brings about the greatest positive impact, based upon their values. It is the broad, evidence-based, and cause-neutral approach that distinguishes effective altruism from traditional altruism or charity. Effective altruism is part of the larger movement towards evidence-based practices.
While a substantial proportion of effective altruists have focused on the nonprofit sector, the philosophy of effective altruism applies more broadly to prioritizing the scientific projects, companies, and policy initiatives which can be estimated to save lives, help people, or otherwise have the biggest benefit. People associated with the movement include philosopher Peter Singer, Facebook co-founder Dustin Moskovitz, Cari Tuna, Oxford-based researchers William MacAskill and Toby Ord, and professional poker player Liv Boeree.
Genetics
OXTR, CD38, COMT, DRD4, DRD5, IGF2, and GABRB2 are candidate genes for influencing altruistic behavior.
Digital altruism
Digital altruism is the notion that some are willing to freely share information based on the principle of reciprocity and in the belief that in the end, everyone benefits from sharing information via the Internet.
There are three types of digital altruism: (1) "everyday digital altruism", involving expedience, ease, moral engagement, and conformity; (2) "creative digital altruism", involving creativity, heightened moral engagement, and cooperation; and (3) "co-creative digital altruism", involving creativity, moral engagement, and meta-cooperative efforts.
https://en.wikipedia.org/wiki/ASCII
ASCII, abbreviated from American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. Because of technical limitations of computer systems at the time it was invented, ASCII has just 128 code points, of which only 95 are printable characters, which severely limited its scope. Modern computer systems have evolved to use Unicode, which has millions of code points, but the first 128 of these are the same as the ASCII set.
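The overlap with Unicode can be checked directly: UTF-8 encodes code points 0 through 127 as the same single bytes that ASCII assigns them, so valid ASCII text is automatically valid UTF-8. A quick sketch in Python:

```python
# Every ASCII code point maps to the identical single byte under UTF-8.
for i in range(128):
    assert chr(i).encode("utf-8") == bytes([i])

# Beyond 127, UTF-8 switches to multi-byte sequences:
print("é".encode("utf-8"))   # b'\xc3\xa9' (two bytes, not ASCII)
```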
The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding.
ASCII is one of the IEEE milestones.
Overview
ASCII was developed from telegraph code. Its first commercial use was in the Teletype Model 33 and the Teletype Model 35 as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began in May 1961, with the first meeting of the X3.2 subcommittee of the American Standards Association (ASA), now the American National Standards Institute (ANSI). The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists and added features for devices other than teleprinters.
The use of ASCII format for Network Interchange was described in 1969. That document was formally elevated to an Internet Standard in 2015.
Originally based on the (modern) English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart in this article. Ninety-five of the encoded characters are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with Teletype machines; most of these are now obsolete, although a few are still commonly used, such as the carriage return, line feed, and tab codes.
For example, lowercase i would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105.
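The arithmetic in this example can be reproduced with Python's built-ins:

```python
# 'i' is the ninth letter; lowercase letters start after 0x60,
# so its ASCII code is 0x60 + 9 = 0x69 = decimal 105.
print(ord("i"))        # 105
print(hex(ord("i")))   # 0x69
print(bin(ord("i")))   # 0b1101001
print(chr(105))        # i
```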
Despite being an American standard, ASCII does not have a code point for the cent (¢). It also does not support English terms with diacritical marks such as résumé and jalapeño, or proper nouns with diacritical marks such as Beyoncé.
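This limitation surfaces whenever such text is forced into ASCII; in Python, for instance, encoding a word with diacritics raises an error unless a lossy fallback is requested:

```python
# ASCII cannot represent accented letters (or the cent sign):
try:
    "résumé".encode("ascii")
except UnicodeEncodeError as err:
    print(err.reason)   # describes why the encode failed

# A lossy fallback substitutes '?' for each unencodable character:
print("résumé".encode("ascii", errors="replace"))  # b'r?sum?'
```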
History
The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI) and ultimately became the American National Standards Institute (ANSI).
With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks 6 and 7, and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
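That single-bit difference (bit 5, value 0x20) is easy to verify, and it is what makes mask-based case conversion and case-insensitive matching cheap:

```python
# Upper- and lowercase ASCII letters differ only in bit 5 (0x20):
assert all((ord(c) ^ ord(c.lower())) == 0x20
           for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ")

def ascii_upper(ch):
    """Uppercase a single ASCII letter by clearing bit 5."""
    return chr(ord(ch) & ~0x20) if "a" <= ch <= "z" else ch

print(ascii_upper("i"))   # I
```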
The X3 committee made other changes, including other new characters (the brace and vertical bar characters), renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed). ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986.
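The single-bit difference between cases described above can be demonstrated in a short sketch (the helper name `ascii_casefold` is illustrative, not part of any standard):

```python
# Upper- and lowercase ASCII letters differ only in bit 5 (value 0x20),
# a consequence of placing lowercase in sticks 6 and 7.
for upper, lower in zip("ABCXYZ", "abcxyz"):
    assert ord(upper) ^ ord(lower) == 0x20

def ascii_casefold(ch):
    # Clear bit 5 to fold an ASCII letter to uppercase.
    return chr(ord(ch) & ~0x20) if ch.isalpha() else ch

assert ascii_casefold('q') == ascii_casefold('Q') == 'Q'
```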
Revisions of the ASCII standard:
ASA X3.4-1963
ASA X3.4-1965 (approved, but not published, nevertheless used by IBM 2260 & 2265 Display Stations and IBM 2848 Display Control)
USAS X3.4-1967
USAS X3.4-1968
ANSI X3.4-1977
ANSI X3.4-1986
ANSI X3.4-1986 (R1992)
ANSI X3.4-1986 (R1997)
ANSI INCITS 4-1986 (R2002)
ANSI INCITS 4-1986 (R2007)
(ANSI) INCITS 4-1986[R2012]
(ANSI) INCITS 4-1986[R2017]
In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first) and recorded on perforated tape. They proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats.
Design considerations
Bit width
The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1924, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII.
ITA2 was in turn based on the 5-bit telegraph code that Émile Baudot invented in 1870 and patented in 1874.
The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.
The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0.
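The even-parity scheme made possible by that spare eighth bit can be sketched as follows (the function name is illustrative):

```python
# With seven data bits in an octet, the eighth bit could carry parity.
def with_even_parity(code7):
    parity = bin(code7).count("1") & 1   # 1 if an odd number of set bits
    return code7 | (parity << 7)         # set bit 7 so the total is even

b = with_even_parity(ord('A'))           # 'A' = 0b1000001, two set bits
assert b == ord('A')                     # already even: parity bit stays 0
b = with_even_parity(ord('C'))           # 'C' = 0b1000011, three set bits
assert b == 0x80 | ord('C')              # parity bit set to make the count even
```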
Internal organization
The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard. The digits 0–9 are prefixed with 011, but the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward (for example, 5 is encoded as 0110101, where 5 is 0101 in binary).
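The digit layout means the BCD value of any ASCII digit is simply its low nibble, which can be verified directly:

```python
# ASCII digits are the prefix 011 followed by the digit's value in binary,
# so the low four bits are the binary-coded-decimal value.
for d in range(10):
    code = ord(str(d))
    assert code >> 4 == 0b011     # high bits: 011
    assert code & 0x0F == d       # low nibble: the digit itself

assert ord('5') == 0b0110101
```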
Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, on which the shifted values of 23456789- were "#$%_&'(). Early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but the 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters.
Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that has become de facto standard on computers following the IBM PC (1981), especially Model M (1984) and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.
Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with the simple line characters \ | (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A.
The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.
Character order
ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:
All uppercase come before lowercase letters; for example, "Z" precedes "a"
Digits and many punctuation marks come before letters
An intermediate order converts uppercase letters to lowercase before comparing ASCII values.
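The two orders can be contrasted with Python's built-in sort, which compares code points by default:

```python
words = ["apple", "Zebra", "banana"]

# Plain ASCIIbetical order: all uppercase letters sort before lowercase.
assert sorted(words) == ["Zebra", "apple", "banana"]

# Intermediate order: fold case before comparing code values.
assert sorted(words, key=str.lower) == ["apple", "banana", "Zebra"]
```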
Character set
Character groups
Control characters
ASCII reserves the first 32 code points (numbers 0–31 decimal) and the last one (number 127 decimal) for control characters. These are codes intended to control peripheral devices (such as printers), or to provide meta-information about data streams, such as those stored on magnetic tape. Despite their name, these code points do not represent printable characters although for debugging purposes, "placeholder" symbols (such as those given in ISO 2047 and its predecessors) are assigned.
For example, character 0x0A represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.
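The control-character range is easy to test for; the predicate below (the name is illustrative) captures the 33 code points described above:

```python
def is_ascii_control(code):
    # Codes 0-31 plus 127 (DEL) are the ASCII control characters.
    return code < 0x20 or code == 0x7F

assert is_ascii_control(0x0A)           # line feed
assert is_ascii_control(8)              # backspace
assert is_ascii_control(0x7F)           # delete
assert not is_ascii_control(ord('A'))   # a printable character
```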
The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example with the meaning of "delete".
Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected.
When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning, but control-Q is replaced by a second control-S to resume output.
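A toy model of this flow-control discipline can be sketched as follows; real XON/XOFF handling lives in the terminal driver, and the `consume` helper here is purely illustrative:

```python
# XON/XOFF software flow control uses two ASCII device-control codes.
XON = 0x11    # DC1, control-Q: resume transmission
XOFF = 0x13   # DC3, control-S: pause transmission

def consume(stream):
    # Honour XOFF/XON while consuming a byte stream: bytes received
    # while "paused" are dropped, mimicking a stopped tape reader.
    paused, out = False, bytearray()
    for b in stream:
        if b == XOFF:
            paused = True
        elif b == XON:
            paused = False
        elif not paused:
            out.append(b)
    return bytes(out)

assert consume(b"ab\x13ignored\x11cd") == b"abcd"
```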
The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control-character lettering on the keycap above the letter was TAPE and TAPE (the latter with an overline), respectively.
Delete vs backspace
The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send a BS (backspace). Instead, there was a key marked "Rubout" that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus the DEL character was assigned to erase the previous character. Because of this, DEC video terminals (by default) sent the DEL character for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS character for the backspace key.
The Unix terminal driver could only use one character to erase the previous character; this could be set to BS or DEL, but not both, resulting in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS character allowed control+H to be used for other purposes, such as the "help" prefix command in GNU Emacs.
Escape
Many more of the control characters have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this interpretation has been co-opted and has eventually been changed.
In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence usually in the form of a so-called "ANSI escape code" (or, more properly, a "Control Sequence Introducer") from ECMA-48 (1972) and its successors, beginning with ESC followed by a "[" (left-bracket) character. In contrast, an ESC sent from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.
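A minimal example of such a command sequence, assuming an ANSI-capable terminal, is the bold/reset pair:

```python
ESC = "\x1b"  # the escape character, code 27

# A Control Sequence Introducer is ESC followed by '[';
# "ESC [ 1 m" switches a compliant terminal to bold, "ESC [ 0 m" resets it.
bold = ESC + "[1m" + "important" + ESC + "[0m"
print(bold)   # renders in bold on ANSI-capable terminals

assert bold.startswith("\x1b[")
```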
End of line
The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while the typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line.
DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along, the convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system.
Until the introduction of PC DOS in 1981, IBM had no influence in this because their 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output on which the concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, and Windows in turn inherited it from MS-DOS.
Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as a line terminator; however, since Apple has now replaced these obsolete operating systems with the Unix-based macOS operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention.
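Software that accepts text from all three conventions typically normalizes line endings on input; a minimal sketch (the order of replacements matters, so that CR-LF is not turned into two line feeds):

```python
# Normalize the three historical line-ending conventions to a lone LF.
def normalize_newlines(text):
    return text.replace("\r\n", "\n").replace("\r", "\n")

assert normalize_newlines("a\r\nb") == "a\nb"   # CR-LF (DOS/Windows, Teletype)
assert normalize_newlines("a\rb") == "a\nb"     # lone CR (classic Mac OS)
assert normalize_newlines("a\nb") == "a\nb"     # lone LF (Multics, Unix)
```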
End of file/stream
The PDP-6 monitor, and its PDP-10 successor TOPS-10, used control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file. For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text character (ETX), also known as control-C, was inappropriate for a variety of reasons, while using control-Z as the control character to end a file is analogous to the letter Z's position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX character to interrupt and halt a program via an input data stream, usually from a keyboard.
The Unix terminal driver uses the end-of-transmission character (EOT), also known as control-D, to indicate the end of a data stream.
In the C programming language, and in Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".
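Reading an ASCIZ string out of a byte buffer, as C's string functions do, amounts to scanning for the NUL terminator; a sketch (the helper name is illustrative):

```python
# Extract a null-terminated (ASCIZ) string from a byte buffer,
# the way C's strlen/strcpy family treats memory.
def read_asciz(buf, offset=0):
    end = buf.index(0, offset)           # locate the NUL terminator
    return buf[offset:end].decode("ascii")

buf = b"hello\x00world\x00garbage"
assert read_asciz(buf) == "hello"
assert read_asciz(buf, 6) == "world"
```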
Control code chart
Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.
Printable characters
Codes 20hex to 7Ehex, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total.
Code 20hex, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character) it is listed in the table below instead of in the previous section.
Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart. Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).
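The count of 95 printable characters, bounded by space and tilde, can be confirmed directly:

```python
# The printable characters occupy codes 0x20 through 0x7E inclusive.
printable = [chr(c) for c in range(0x20, 0x7F)]

assert len(printable) == 95
assert printable[0] == ' '    # space, 20hex
assert printable[-1] == '~'   # tilde, 7Ehex
```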
Usage
ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe". Because of his extensive work on ASCII, Bemer has been called "the father of ASCII".
On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating:
I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations.
All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.
ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII.
Variants and derivations
As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.
7-bit codes
From early in its development, ASCII was intended to be just one of several national variants of an international character code standard.
Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£); e.g. with code page 1104. Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters.
Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).
Each national variant would share most characters in common, but assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967 caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.
ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and, therefore, which character a code represented, and in general, text-processing systems could cope with only one variant anyway.
Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as
ä aÄiÜ = 'Ön'; ü
instead of
{ a[i] = '\n'; }
C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".
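The lunch example above amounts to reading US-ASCII bytes through the Swedish national variant; a sketch using a partial mapping of the "national use" code points (assuming the Swedish SEN 850200 assignments):

```python
# View US-ASCII punctuation through the Swedish ISO 646 variant,
# which reassigns the bracket/brace/bar code points to accented letters.
SWEDISH = str.maketrans("[\\]{|}", "ÄÖÅäöå")

garbled = "N{ jag har sm|rg}sar"
assert garbled.translate(SWEDISH) == "Nä jag har smörgåsar"
```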
In Japan and Korea, a variation of ASCII is still used, in which the backslash (5C hex) is rendered as ¥ (a yen sign, in Japan) or ₩ (a won sign, in Korea). This means that, for example, the file path C:\Users\Smith is shown as C:¥Users¥Smith (in Japan) or C:₩Users₩Smith (in Korea).
In Europe, teletext character sets, which are variants of ASCII, are used for broadcast TV subtitles, defined by World System Teletext and broadcast using the DVB-TXT standard for embedding teletext into DVB transmissions. In the case that the subtitles were initially authored for teletext and converted, the derived subtitle formats are constrained to the same character sets.
8-bit codes
Eventually, as 8-, 16-, and 32-bit (and later 64-bit) computers began to replace 12-, 18-, and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters.
Such 8-bit encodings include ISCII (India) and VISCII (Vietnam). Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard.
Most early home computer systems developed their own 8-bit character sets containing line-drawing and game glyphs, and often filled in some or all of the control characters from 0 to 31 with more graphics. Kaypro CP/M computers used the "upper" 128 characters for the Greek alphabet.
The PETSCII code Commodore International used for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963, instead of the more common ASCII-1967, such as found on the ZX Spectrum computer. Atari 8-bit computers and Galaksija computers also used ASCII variants.
The IBM PC defined code page 437, which replaced the control characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. The Macintosh defined Mac OS Roman and Postscript defined another character set: both sets contained "international" letters, typographic symbols and punctuation marks instead of graphics, more like modern character sets.
The ISO/IEC 8859 standard (derived from the DEC-MCS) finally provided a standard that most systems copied (at least as accurately as they copied ASCII, but with many substitutions). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encodings until 2008 when UTF-8 became more common.
ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system.
Unicode
Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16-, or 32-bit binary formats, called UTF-8, UTF-16, and UTF-32, respectively).
ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged.
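Both compatibility properties are easy to verify:

```python
s = "plain ASCII text"

# Backward compatibility: ASCII-only text is byte-identical in UTF-8.
assert s.encode("ascii") == s.encode("utf-8")

# Every ASCII character keeps its numeric code point in Unicode.
assert all(ord(chr(c)) == c for c in range(128))

# No UTF-8 byte for an ASCII character ever has the high bit set.
assert all(b < 0x80 for b in s.encode("utf-8"))
```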
See also
3568 ASCII, an asteroid named after the character encoding
Basic Latin (Unicode block) (ASCII as a subset of Unicode)
HTML decimal character rendering
Jargon File, a glossary of computer programmer slang which includes a list of common slang names for ASCII characters
List of computer character sets
List of Unicode characters
Notes
References
Further reading
External links
Computer-related introductions in 1963
Character sets
Character encoding
Latin-script representations
Presentation layer protocols
American National Standards Institute standards
https://en.wikipedia.org/wiki/Algae
Algae (, ; : alga ) is an informal term for a large and diverse group of photosynthetic, eukaryotic organisms. It is a polyphyletic grouping that includes species from multiple distinct clades. Included organisms range from unicellular microalgae, such as Chlorella, Prototheca and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to in length. Most are aquatic and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem that are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried by water are plankton, specifically phytoplankton.
Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction.
Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids of non-vascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external energy sources and have limited or no photosynthetic apparatus. Some other heterotrophic organisms, such as the apicomplexans, are also derived from cells whose ancestors possessed plastids, but are not traditionally considered as algae. Algae have photosynthetic machinery ultimately derived from cyanobacteria that produce oxygen as a by-product of photosynthesis, unlike other photosynthetic bacteria such as purple and green sulfur bacteria. Fossilized filamentous algae from the Vindhya basin have been dated back to 1.6 to 1.7 billion years ago.
Because of the wide range of types of algae, they have an increasing number of industrial and traditional applications in human society. Traditional seaweed farming practices have existed for thousands of years and have strong traditions in East Asian food cultures. More modern algaculture applications extend these food traditions to other uses, including cattle feed, using algae for bioremediation or pollution control, transforming sunlight into algae fuels or other chemicals used in industrial processes, and medical and scientific applications. A 2020 review found that these applications of algae could play an important role in carbon sequestration in order to mitigate climate change while providing lucrative value-added products for global economies.
Etymology and study
The singular, alga, is the Latin word for 'seaweed' and retains that meaning in English. The etymology is obscure. Although some speculate that it is related to the Latin algēre, 'be cold', no reason is known to associate seaweed with temperature. A more likely source is a root meaning 'binding, entwining'.
The Ancient Greek word for 'seaweed' was φῦκος (phŷkos), which could mean either the seaweed (probably red algae) or a red dye derived from it. The Latinization, fucus, meant primarily the cosmetic rouge. The etymology is uncertain, but a strong candidate has long been some word related to the Biblical pūkh, 'paint' (if not that word itself), a cosmetic eye-shadow used by the ancient Egyptians and other inhabitants of the eastern Mediterranean. It could be any color: black, red, green, or blue.
The study of algae is most commonly called phycology (from the Greek phŷkos, 'seaweed'); the term algology is falling out of use.
Classifications
One definition of algae is that they "have chlorophyll as their primary photosynthetic pigment and lack a sterile covering of cells around their reproductive cells". On the other hand, the colorless genus Prototheca, placed within the Chlorophyta, is entirely devoid of chlorophyll. Although cyanobacteria are often referred to as "blue-green algae", most authorities exclude all prokaryotes, including cyanobacteria, from the definition of algae.
The algae contain chloroplasts that are similar in structure to cyanobacteria. Chloroplasts contain circular DNA like that in cyanobacteria and are interpreted as representing reduced endosymbiotic cyanobacteria. However, the exact origin of the chloroplasts is different among separate lineages of algae, reflecting their acquisition during different endosymbiotic events. The table below describes the composition of the three major groups of algae. Their lineage relationships are shown in the figure in the upper right. Many of these groups contain some members that are no longer photosynthetic. Some retain plastids, but not chloroplasts, while others have lost plastids entirely.
Phylogeny based on plastid rather than nucleocytoplasmic genealogy:
Linnaeus, in Species Plantarum (1753), the starting point for modern botanical nomenclature, recognized 14 genera of algae, of which only four are currently considered among algae. In Systema Naturae, Linnaeus described the genera Volvox and Corallina, and a species of Acetabularia (as Madrepora), among the animals.
In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves.
W. H. Harvey (1811–1866) and Lamouroux (1813) were the first to divide macroscopic algae into four divisions based on their pigmentation. This was the first use of a biochemical criterion in plant systematics. Harvey's four divisions are: red algae (Rhodospermae), brown algae (Melanospermae), green algae (Chlorospermae), and Diatomaceae.
At this time, microscopic algae were discovered and reported by a different group of workers (e.g., O. F. Müller and Ehrenberg) studying the Infusoria (microscopic organisms). Unlike macroalgae, which were clearly viewed as plants, microalgae were frequently considered animals because they are often motile. Even the nonmotile (coccoid) microalgae were sometimes merely seen as stages of the lifecycle of plants, macroalgae, or animals.
Although used as a taxonomic category in some pre-Darwinian classifications, e.g., Linnaeus (1753), de Jussieu (1789), Horaninow (1843), Agassiz (1859), Wilson & Cassin (1864), in later classifications the "algae" came to be seen as an artificial, polyphyletic group.
Throughout the 20th century, most classifications treated the following groups as divisions or classes of algae: cyanophytes, rhodophytes, chrysophytes, xanthophytes, bacillariophytes, phaeophytes, pyrrhophytes (cryptophytes and dinophytes), euglenophytes, and chlorophytes. Later, many new groups were discovered (e.g., Bolidophyceae), and others were splintered from older groups: charophytes and glaucophytes (from chlorophytes), many heterokontophytes (e.g., synurophytes from chrysophytes, or eustigmatophytes from xanthophytes), haptophytes (from chrysophytes), and chlorarachniophytes (from xanthophytes).
With the abandonment of plant-animal dichotomous classification, most groups of algae (sometimes all) were included in Protista, later also abandoned in favour of Eukaryota. However, as a legacy of the older plant life scheme, some groups that were also treated as protozoans in the past still have duplicated classifications (see ambiregnal protists).
Some parasitic algae (e.g., the green algae Prototheca and Helicosporidium, parasites of metazoans, or Cephaleuros, parasites of plants) were originally classified as fungi, sporozoans, or protistans of incertae sedis, while others (e.g., the green algae Phyllosiphon and Rhodochytrium, parasites of plants, or the red algae Pterocladiophila and Gelidiocolax mammillatus, parasites of other red algae, or the dinoflagellates Oodinium, parasites of fish) had their relationship with algae conjectured early. In other cases, some groups were originally characterized as parasitic algae (e.g., Chlorochytrium), but later were seen as endophytic algae. Some filamentous bacteria (e.g., Beggiatoa) were originally seen as algae. Furthermore, groups like the apicomplexans are also parasites derived from ancestors that possessed plastids, but are not included in any group traditionally seen as algae.
Relationship to land plants
The first land plants probably evolved from shallow freshwater charophyte algae much like Chara almost 500 million years ago. These probably had an isomorphic alternation of generations and were probably filamentous. Fossils of isolated land plant spores suggest land plants may have been around as long as 475 million years ago.
Morphology
A range of algal morphologies is exhibited, and convergence of features in unrelated groups is common. The only groups to exhibit three-dimensional multicellular thalli are the reds and browns, and some chlorophytes. Apical growth is constrained to subsets of these groups: the florideophyte reds, various browns, and the charophytes. The forms of charophytes are quite different from those of reds and browns, because they have distinct nodes separated by internode 'stems'; whorls of branches reminiscent of the horsetails occur at the nodes. Conceptacles are another polyphyletic trait; they appear in the coralline algae and the Hildenbrandiales, as well as the browns.
Most of the simpler algae are unicellular flagellates or amoeboids, but colonial and nonmotile forms have developed independently among several of the groups. Some of the more common organizational levels, more than one of which may occur in the lifecycle of a species, are:
Colonial: small, regular groups of motile cells
Capsoid: individual non-motile cells embedded in mucilage
Coccoid: individual non-motile cells with cell walls
Palmelloid: nonmotile cells embedded in mucilage
Filamentous: a string of connected nonmotile cells, sometimes branching
Parenchymatous: cells forming a thallus with partial differentiation of tissues
In three lines, even higher levels of organization have been reached, with full tissue differentiation. These are the brown algae (some of which, the kelps, may reach 50 m in length), the red algae, and the green algae. The most complex forms are found among the charophyte algae (see Charales and Charophyta), in a lineage that eventually led to the higher land plants. The innovation that defines these nonalgal plants is the presence of female reproductive organs with protective cell layers that protect the zygote and developing embryo. Hence, the land plants are referred to as the Embryophytes.
Turfs
The term algal turf is commonly used but poorly defined. Algal turfs are thick, carpet-like beds of seaweed that retain sediment and compete with foundation species like corals and kelps, and they are usually less than 15 cm tall. Such a turf may consist of one or more species, and will generally cover an area in the order of a square metre or more. Some common characteristics are listed:
Algae that form aggregations that have been described as turfs include diatoms, cyanobacteria, chlorophytes, phaeophytes and rhodophytes. Turfs are often composed of numerous species at a wide range of spatial scales, but monospecific turfs are frequently reported.
Turfs can be morphologically highly variable over geographic scales and even within species on local scales and can be difficult to identify in terms of the constituent species.
Turfs have been defined as short algae, but this has been used to describe height ranges from less than 0.5 cm to more than 10 cm. In some regions, the descriptions approached heights which might be described as canopies (20 to 30 cm).
Physiology
Many algae, particularly species of the Characeae, have served as model experimental organisms to understand the mechanisms of the water permeability of membranes, osmoregulation, turgor regulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials.
Phytohormones are found not only in higher plants, but in algae, too.
Symbiotic algae
Some species of algae form symbiotic relationships with other organisms. In these symbioses, the algae supply photosynthates (organic substances) to the host organism, which in turn provides protection to the algal cells. The host organism derives some or all of its energy requirements from the algae. Examples are:
Lichens
Lichens are defined by the International Association for Lichenology to be "an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body having a specific structure". The fungi, or mycobionts, are mainly from the Ascomycota with a few from the Basidiomycota. In nature, they do not occur separate from lichens. It is unknown when they began to associate. One mycobiont associates with the same phycobiont species, rarely two, from the green algae, except that alternatively, the mycobiont may associate with a species of cyanobacteria (hence "photobiont" is the more accurate term). A photobiont may be associated with many different mycobionts or may live independently; accordingly, lichens are named and classified as fungal species. The association is termed a morphogenesis because the lichen has a form and capabilities not possessed by the symbiont species alone (they can be experimentally isolated). The photobiont possibly triggers otherwise latent genes in the mycobiont.
Trentepohlia is an example of a common green algal genus, found worldwide, that can grow on its own or be lichenised. Lichens thus share some of the same habitat, and often a similar appearance, with specialized species of algae (aerophytes) that grow on exposed surfaces such as tree trunks and rocks, sometimes discoloring them.
Coral reefs
Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia (stony corals). These animals metabolize sugar and oxygen to obtain energy for their cell-building processes, including secretion of the exoskeleton, with water and carbon dioxide as byproducts. Dinoflagellates (algal protists) are often endosymbionts in the cells of the coral-forming marine invertebrates, where they accelerate host-cell metabolism by generating sugar and oxygen immediately available through photosynthesis using incident light and the carbon dioxide produced by the host. Reef-building stony corals (hermatypic corals) require endosymbiotic algae from the genus Symbiodinium to be in a healthy condition. The loss of Symbiodinium from the host is known as coral bleaching, a condition which leads to the deterioration of a reef.
Sea sponges
Endosymbiontic green algae live close to the surface of some sponges, for example, breadcrumb sponges (Halichondria panicea). The alga is thus protected from predators; the sponge is provided with oxygen and sugars which can account for 50 to 80% of sponge growth in some species.
Life cycle
Rhodophyta, Chlorophyta, and Heterokontophyta, the three main algal divisions, have life cycles which show considerable variation and complexity. In general, there is an asexual phase where the seaweed's cells are diploid, and a sexual phase where the cells are haploid, followed by fusion of the male and female gametes. Asexual reproduction permits efficient population increases, but less variation is possible. Commonly, in sexual reproduction of unicellular and colonial algae, two specialized, sexually compatible, haploid gametes make physical contact and fuse to form a zygote. To ensure a successful mating, the development and release of gametes is highly synchronized and regulated; pheromones may play a key role in these processes. Sexual reproduction allows for more variation and provides the benefit of efficient recombinational repair of DNA damages during meiosis, a key stage of the sexual cycle. However, sexual reproduction is more costly than asexual reproduction. Meiosis has been shown to occur in many different species of algae.
Numbers
The Algal Collection of the US National Herbarium (located in the National Museum of Natural History) consists of approximately 320,500 dried specimens, which, although not exhaustive (no exhaustive collection exists), gives an idea of the order of magnitude of the number of algal species (that number remains unknown). Estimates vary widely. For example, according to one standard textbook, in the British Isles the UK Biodiversity Steering Group Report estimated there to be 20,000 algal species in the UK. Another checklist reports only about 5,000 species. Regarding the difference of about 15,000 species, the text concludes: "It will require many detailed field surveys before it is possible to provide a reliable estimate of the total number of species ..."
Regional and group estimates have been made, as well:
5,000–5,500 species of red algae worldwide
"some 1,300 in Australian Seas"
400 seaweed species for the western coastline of South Africa, and 212 species from the coast of KwaZulu-Natal. Some of these are duplicates, as the range extends across both coasts, and the total recorded is probably about 500 species. Most of these are listed in List of seaweeds of South Africa. These exclude phytoplankton and crustose corallines.
669 marine species from California (US)
642 in the check-list of Britain and Ireland
and so on, but lacking any scientific basis or reliable sources, these numbers have no more credibility than the British ones mentioned above. Most estimates also omit microscopic algae, such as phytoplankton.
The most recent estimate suggests 72,500 algal species worldwide.
Distribution
The distribution of algal species has been fairly well studied since the founding of phytogeography in the mid-19th century. Algae spread mainly by the dispersal of spores analogously to the dispersal of Plantae by seeds and spores. This dispersal can be accomplished by air, water, or other organisms. Due to this, spores can be found in a variety of environments: fresh and marine waters, air, soil, and in or on other organisms. Whether a spore is to grow into an organism depends on the combination of the species and the environmental conditions where the spore lands.
The spores of freshwater algae are dispersed mainly by running water and wind, as well as by living carriers. However, not all bodies of water can carry all species of algae, as the chemical composition of certain water bodies limits the algae that can survive within them. Marine spores are often spread by ocean currents. Ocean water presents many vastly different habitats based on temperature and nutrient availability, resulting in phytogeographic zones, regions, and provinces.
To some degree, the distribution of algae is subject to floristic discontinuities caused by geographical features, such as Antarctica, long distances of ocean or general land masses. It is, therefore, possible to identify species occurring by locality, such as "Pacific algae" or "North Sea algae". When they occur out of their localities, hypothesizing a transport mechanism is usually possible, such as the hulls of ships. For example, Ulva reticulata and U. fasciata travelled from the mainland to Hawaii in this manner.
Mapping is possible for select species only: "there are many valid examples of confined distribution patterns." For example, Clathromorphum is an arctic genus and is not mapped far south of there. However, scientists regard the overall data as insufficient due to the "difficulties of undertaking such studies."
Ecology
Algae are prominent in bodies of water, common in terrestrial environments, and found in unusual environments, such as on snow and ice. Seaweeds grow mostly in shallow marine waters; some, however, such as Navicula pennata, have been recorded at considerable depth. A type of alga, Ancylonema nordenskioeldii, was found in Greenland in areas known as the 'Dark Zone', where it caused an increase in the rate of ice sheet melting. The same alga was found in the Italian Alps after pink ice appeared on parts of the Presena glacier.
The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column (phytoplankton) provide the food base for most marine food chains. In very high densities (algal blooms), these algae may discolor the water and outcompete, poison, or asphyxiate other life forms.
Algae can be used as indicator organisms to monitor pollution in various aquatic systems. In many cases, algal metabolism is sensitive to various pollutants. Due to this, the species composition of algal populations may shift in the presence of chemical pollutants. To detect these changes, algae can be sampled from the environment and maintained in laboratories with relative ease.
On the basis of their habitat, algae can be categorized as: aquatic (planktonic, benthic, marine, freshwater, lentic, lotic), terrestrial, aerial (subaerial), lithophytic, halophytic (or euryhaline), psammon, thermophilic, cryophilic, epibiont (epiphytic, epizoic), endosymbiont (endophytic, endozoic), parasitic, calcifilic or lichenic (phycobiont).
Cultural associations
In classical Chinese, the word zǎo (藻) is used both for "algae" and (in the modest tradition of the imperial scholars) for "literary talent". The third island in Kunming Lake beside the Summer Palace in Beijing is known as the Zaojian Tang Dao (藻鑒堂島), which thus simultaneously means "Island of the Algae-Viewing Hall" and "Island of the Hall for Reflecting on Literary Talent".
Cultivation
Seaweed farming
Bioreactors
Uses
Agar
Agar, a gelatinous substance derived from red algae, has a number of commercial uses. It is a good medium on which to grow bacteria and fungi, as most microorganisms cannot digest agar.
Alginates
Alginic acid, or alginate, is extracted from brown algae. Its uses range from gelling agents in food, to medical dressings. Alginic acid also has been used in the field of biotechnology as a biocompatible medium for cell encapsulation and cell immobilization. Molecular cuisine is also a user of the substance for its gelling properties, by which it becomes a delivery vehicle for flavours.
Between 100,000 and 170,000 wet tons of Macrocystis are harvested annually in New Mexico for alginate extraction and abalone feed.
Energy source
To be competitive and, in the long run, independent of fluctuating support from (local) policy, biofuels should equal or beat the cost level of fossil fuels. Here, algae-based fuels hold great promise, directly related to their potential to produce more biomass per unit area per year than any other form of biomass. The break-even point for algae-based biofuels is estimated to occur by 2025.
Fertilizer
For centuries, seaweed has been used as a fertilizer; George Owen of Henllys, writing in the 16th century, referred to drift weed used in this way in South Wales.
Today, algae are used by humans in many ways; for example, as fertilizers, soil conditioners, and livestock feed. Aquatic and microscopic species are cultured in clear tanks or ponds and are either harvested or used to treat effluents pumped through the ponds. Algaculture on a large scale is an important type of aquaculture in some places. Maerl is commonly used as a soil conditioner.
Nutrition
Naturally growing seaweeds are an important source of food, especially in Asia, leading some to label them as superfoods. They provide many vitamins, including A, B1, B2, B6, niacin, and C, and are rich in iodine, potassium, iron, magnesium, and calcium. In addition, commercially cultivated microalgae, including both algae and cyanobacteria, are marketed as nutritional supplements, such as spirulina, Chlorella, and the vitamin-C supplement from Dunaliella, high in beta-carotene.
Algae are national foods of many nations: China consumes more than 70 species, including fat choy, a cyanobacterium considered a vegetable; Japan, over 20 species such as nori and aonori; Ireland, dulse; Chile, cochayuyo. Laver is used to make laverbread in Wales. In Korea, green laver is used in a variety of dishes. Algae are also eaten along the west coast of North America from California to British Columbia, in Hawaii, and by the Māori of New Zealand. Sea lettuce and badderlocks are salad ingredients in Scotland, Ireland, Greenland, and Iceland. Algae are being considered as a potential solution to the world hunger problem.
Two popular forms of algae are used in cuisine:
Chlorella: This form of alga is found in freshwater and contains photosynthetic pigments in its chloroplast. It is high in iron, zinc, magnesium, vitamin B2, and omega-3 fatty acids, and it contains all nine of the essential amino acids the body does not produce on its own.
Spirulina: Otherwise known as a cyanobacterium (a prokaryote, or "blue-green alga")
The oils from some algae have high levels of unsaturated fatty acids. For example, Parietochloris incisa is high in arachidonic acid, where it reaches up to 47% of the triglyceride pool. Some varieties of algae favored by vegetarianism and veganism contain the long-chain, essential omega-3 fatty acids, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Fish oil contains the omega-3 fatty acids, but the original source is algae (microalgae in particular), which are eaten by marine life such as copepods and are passed up the food chain. Algae have emerged in recent years as a popular source of omega-3 fatty acids for vegetarians who cannot get long-chain EPA and DHA from other vegetarian sources such as flaxseed oil, which only contains the short-chain alpha-linolenic acid (ALA).
Pollution control
Sewage can be treated with algae, reducing the use of large amounts of toxic chemicals that would otherwise be needed.
Algae can be used to capture fertilizers in runoff from farms. When subsequently harvested, the enriched algae can be used as fertilizer.
Aquaria and ponds can be filtered using algae, which absorb nutrients from the water in a device called an algae scrubber, also known as an algae turf scrubber.
Agricultural Research Service scientists found that 60–90% of nitrogen runoff and 70–100% of phosphorus runoff can be captured from manure effluents using a horizontal algae scrubber, also called an algal turf scrubber (ATS). Scientists developed the ATS, which consists of shallow, 100-foot raceways of nylon netting where algae colonies can form, and studied its efficacy for three years. They found that algae can readily be used to reduce the nutrient runoff from agricultural fields and increase the quality of water flowing into rivers, streams, and oceans. Researchers collected and dried the nutrient-rich algae from the ATS and studied its potential as an organic fertilizer. They found that cucumber and corn seedlings grew just as well using ATS organic fertilizer as they did with commercial fertilizers. Algae scrubbers, using bubbling upflow or vertical waterfall versions, are now also being used to filter aquaria and ponds.
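As a rough illustration of the capture figures above, the sketch below estimates how much nutrient mass an ATS might remove from a given runoff load. Only the 60–90% nitrogen and 70–100% phosphorus efficiency ranges come from the reported results; the runoff loads and variable names are invented for illustration.

```python
# Hypothetical sketch: estimate nutrient mass captured by an algal turf
# scrubber (ATS). The efficiency ranges (60-90% N, 70-100% P) are from
# the reported results; the runoff loads below are invented examples.

def captured_range(load_kg, low_frac, high_frac):
    """Return the (min, max) nutrient mass captured for a given load."""
    return load_kg * low_frac, load_kg * high_frac

# Assumed annual nutrient loads in a manure effluent stream (illustrative).
nitrogen_load_kg = 500.0
phosphorus_load_kg = 120.0

n_min, n_max = captured_range(nitrogen_load_kg, 0.60, 0.90)
p_min, p_max = captured_range(phosphorus_load_kg, 0.70, 1.00)
print(f"N captured: {n_min:.0f}-{n_max:.0f} kg")   # roughly 300-450 kg
print(f"P captured: {p_min:.0f}-{p_max:.0f} kg")   # roughly 84-120 kg
```

Even under the low end of both ranges, most of the nutrient load is diverted from waterways, which is why the harvested biomass is nutrient-rich enough to reuse as fertilizer.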
Polymers
Various polymers can be created from algae, which can be especially useful in the creation of bioplastics. These include hybrid plastics, cellulose-based plastics, poly-lactic acid, and bio-polyethylene. Several companies have begun to produce algae polymers commercially, including for use in flip-flops and in surfboards.
Bioremediation
The alga Stichococcus bacillaris has been seen to colonize silicone resins used at archaeological sites, biodegrading the synthetic substance.
Pigments
The natural pigments (carotenoids and chlorophylls) produced by algae can be used as alternatives to chemical dyes and coloring agents.
The presence of certain individual algal pigments, together with specific pigment concentration ratios, is taxon-specific: analysis of their concentrations with various analytical methods, particularly high-performance liquid chromatography, can therefore offer deep insight into the taxonomic composition and relative abundance of natural algae populations in sea water samples.
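The idea of inferring taxonomic composition from pigment concentrations can be sketched as follows. This is a greatly simplified stand-in for quantitative pigment-based methods; the marker-pigment table reflects commonly cited associations (e.g., fucoxanthin with diatoms, peridinin with dinoflagellates), but the sample values and the single-dominant-marker decision rule are invented for illustration.

```python
# Simplified sketch of marker-pigment classification of a water sample.
# Marker-to-group associations are commonly cited; sample concentrations
# (HPLC-derived, in ug/L) and the decision rule are illustrative only.

MARKERS = {
    "fucoxanthin": "diatoms/brown-algal lineage",
    "peridinin": "dinoflagellates",
    "chlorophyll_b": "green algae",
    "zeaxanthin": "cyanobacteria",
}

def dominant_group(pigments_ug_per_l):
    """Normalize marker pigments to chlorophyll a; return the group whose
    marker has the highest ratio, with that ratio."""
    chl_a = pigments_ug_per_l["chlorophyll_a"]
    ratios = {p: pigments_ug_per_l.get(p, 0.0) / chl_a for p in MARKERS}
    best = max(ratios, key=ratios.get)
    return MARKERS[best], ratios[best]

sample = {"chlorophyll_a": 2.0, "fucoxanthin": 1.1, "peridinin": 0.1,
          "chlorophyll_b": 0.2, "zeaxanthin": 0.05}
group, ratio = dominant_group(sample)
# High fucoxanthin-to-chlorophyll-a ratio suggests a diatom-dominated sample.
```

Real pigment-based analyses solve for the mixture of many co-occurring groups at once rather than picking a single dominant marker, but the normalization step to chlorophyll a is the same.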
Stabilizing substances
Carrageenan, from the red alga Chondrus crispus, is used as a stabilizer in milk products.
Additional images
See also
AlgaeBase
AlgaePARC
Eutrophication
Iron fertilization
Marimo algae
Microbiofuels
Microphyte
Photobioreactor
Phycotechnology
Plant
Toxoid – anatoxin
External links
AlgaeBase – a database of all algal names, including images, nomenclature, taxonomy, distribution, bibliography, uses, and extracts
https://en.wikipedia.org/wiki/Bitumen
Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition, it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In the U.S., the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century, the term asphaltum was in general use. The word derives from the ancient Greek ἄσφαλτος (ásphaltos), which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world, estimated to contain 10 million tons, is the Pitch Lake of southwest Trinidad.
70% of annual bitumen production is destined for road construction, its primary use. In this application, bitumen is used to bind aggregate particles such as gravel, forming a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant.
In material sciences and engineering the terms "asphalt" and "bitumen" are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term "bitumen" for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, "bitumen" is the prevalent term in much of the world; however, in American English, "asphalt" is more commonly used. To help avoid confusion, the phrases "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. Colloquially, various forms of asphalt are sometimes referred to as "tar", as in the name of the La Brea Tar Pits.
Naturally occurring bitumen is sometimes specified by the term "crude bitumen"; its viscosity is similar to that of cold molasses. The material obtained from the fractional distillation of crude oil is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural bitumen in the Athabasca oil sands, which cover an area larger than England.
Terminology
Etymology
The Latin word bitumen has been traced to the Proto-Indo-European root *gʷet-, 'pitch'.
The expression "bitumen" originated in the Sanskrit, where we find the words "jatu", meaning "pitch", and "jatu-krit", meaning "pitch creating", "pitch producing" (referring to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally "gwitu-men" (pertaining to pitch), and by others, "pixtumens" (exuding or bubbling pitch), which was subsequently shortened to "bitumen", thence passing via French into English. From the same root is derived the Anglo Saxon word "cwidu" (Mastix), the German word "Kitt" (cement or mastic) and the old Norse word "kvada".
The word "ašphalt" is claimed to have been derived from the Accadian term "asphaltu" or "sphallo," meaning "to split." It was later adopted by the Homeric Greeks in the form of the adjective ἄσφαλἤς, ἐς signifying "firm," "stable," "secure," and the corresponding verb ἄσφαλίξω, ίσω meaning "to make firm or stable," "to secure".
The word "asphalt" is derived from the late Middle English, in turn from French asphalte, based on Late Latin asphalton, asphaltum, which is the latinisation of the Greek (ásphaltos, ásphalton), a word meaning "asphalt/bitumen/pitch", which perhaps derives from , "not, without", i.e. the alpha privative, and (sphallein), "to cause to fall, baffle, (in passive) err, (in passive) be balked of".
The first use of asphalt by the ancients was as a cement to secure or join various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall.
From the Greek, the word passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). In French, the term asphalte is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads.
Modern terminology
Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today.
In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac").
In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit".
"Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material.
Neither of the terms "asphalt" or "bitumen" should be confused with tar or coal tars. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons primarily sourced from vegetation masses, whether fossilized as with coal, or freshly harvested. The majority of bitumen, on the other hand, was formed naturally when vast quantities of organic animal materials were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined in long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal using the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around.
Composition
Normal composition
The components of bitumen include four main classes of compounds:
Naphthene aromatics (naphthalenes), consisting of partially hydrogenated polycyclic aromatic compounds
Polar aromatics, consisting of high molecular weight phenols and carboxylic acids produced by partial oxidation of the material
Saturated hydrocarbons; the percentage of saturated compounds in asphalt correlates with its softening point
Asphaltenes, consisting of high molecular weight phenols and heterocyclic compounds
Bitumen typically contains, by weight, about 80% carbon, 10% hydrogen, and up to 6% sulfur; at the molecular level, it contains between 5 and 25% by weight of asphaltenes dispersed in 90% to 65% maltenes. Most natural bitumens also contain organosulfur compounds. Nickel and vanadium are found at <10 parts per million, as is typical of some petroleum. The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of bitumen, because the number of molecules with different chemical structure is extremely large".
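As a rough illustration (not a measured analysis), the elemental figures quoted above can be combined to bound the share left for nitrogen, oxygen, and trace elements; a minimal sketch using only the percentages given in this section:

```python
# Illustrative arithmetic only: elemental mass fractions quoted in the text.
carbon, hydrogen = 0.80, 0.10   # ~80% C, ~10% H by weight
sulfur_max = 0.06               # up to 6% S

# Whatever remains must be nitrogen, oxygen, and trace elements.
remainder_min = 1.0 - (carbon + hydrogen + sulfur_max)
remainder_max = 1.0 - (carbon + hydrogen)
print(f"N/O/trace remainder: {remainder_min:.0%} to {remainder_max:.0%}")
```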
Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion include La Brea Tar Pits and the Canadian oil sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes used informally to refer to asphalt, as in Pitch Lake.
Additives, mixtures and contaminants
For economic and other reasons, bitumen is sometimes sold combined with other materials, often without being labeled as anything other than simply "bitumen".
Of particular note is the use of re-refined engine oil bottoms – "REOB" or "REOBs" – the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers, in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of bitumen and poorer-performing pavement.
Occurrence
The majority of bitumen used commercially is obtained from petroleum. Nonetheless, large amounts of bitumen occur in concentrated form in nature. Naturally occurring deposits of bitumen are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These natural deposits of bitumen were formed during the Carboniferous period, when giant swamp forests dominated many parts of the Earth. The remains were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum.
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and the McKittrick Tar Pits in California, as well as in the Dead Sea.
Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US.
The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering , an area larger than England or New York state. These bituminous sands contain of commercially established oil reserves, giving Canada the third largest oil reserves in the world. Although historically it was used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States.
The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage.
Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen.
Bitumen may occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis.
Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat at a temperature of . Due to pressure from the rising of the Rocky Mountains in southwestern Alberta, 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped into underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands.
History
Ancient times
The use of natural bitumen for waterproofing and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered in Mehrgarh, of the Indus Valley civilization, lined with it. By the 3rd millennium BC, refined rock asphalt was in use in the region and was used to waterproof the Great Bath in Mohenjo-daro.
In the ancient Near East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon.
The long Euphrates Tunnel beneath the river Euphrates at Babylon in the time of Queen Semiramis was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent.
Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is moom, which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as Palus Asphaltites (Asphalt Lake).
In approximately 40 AD, Dioscorides described the Dead Sea material as Judaicum bitumen, and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC.
In the ancient Far East, natural bitumen was slowly boiled to get rid of the higher fractions, leaving a thermoplastic material of higher molecular weight that, when layered on objects, became hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China.
In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer.
Continental Europe
In 1553, Pierre Belon described in his work Observations that pissasphalto, a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for tarring of ships.
An 1838 edition of Mechanics Magazine cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation".
But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use of it had been made for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835.
United Kingdom
Among the earlier uses of bitumen in the United Kingdom was for etching. William Salmon's Polygraphice (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources.
The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)".
Claridge obtained a patent in Scotland on 27 March 1838, and obtained a patent in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. Claridge's Patent Asphalte Company – formed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France" – "laid one of the first asphalt pavements in Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". The Bonnington Chemical Works manufactured asphalt using coal tar and by 1839 had installed it in Bonnington.
In 1838, there was a flurry of entrepreneurial activity involving bitumen, which had uses beyond paving. For example, bitumen could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. One of the earliest surviving examples of its use can be seen at Highgate Cemetery, where it was used in 1839 to seal the roof of the terrace catacombs. On the London stock market, there were various claims as to the exclusivity of bitumen quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s".
In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company.
Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm.
United States
The first use of bitumen in the New World was by aboriginal peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive. It is found on many different artifacts, including tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It was also used in decorations: small round shell beads were often set in asphaltum to provide decoration. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was also used to seal the planks on ocean-going canoes.
Asphalt was first used to pave streets in the 1870s. At first, naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia, from 1852 to 1873. In 1876, asphalt was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial.
In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889.
In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. The streets became faster and more dangerous, so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle-class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways.
Canada
Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it was not until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance."
The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot water oil separation process and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which between 1925 and 1958 produced up to per day of bitumen using Dr. Clark's method. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers' ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site.
Photography and art
Bitumen was used in early photographic technology. In 1826, or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes.
Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle.
Modern use
Global use
The vast majority of refined bitumen is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, bitumen is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of bitumen is approximately 102 million tonnes per year. Approximately 85% of all the bitumen produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the bitumen to modify its properties according to the application for which the bitumen is ultimately intended.
A further 10% of global bitumen production is used in roofing applications, where its waterproofing qualities are invaluable.
The remaining 5% of bitumen is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Bitumen is applied in the construction and maintenance of many structures, systems, and components, such as the following:
Highways
Airport runways
Footways and pedestrian ways
Car parks
Racetracks
Tennis courts
Roofing
Damp proofing
Dams
Reservoir and pool linings
Soundproofing
Pipe coatings
Cable coatings
Paints
Building water proofing
Tile underlying waterproofing
Newspaper ink production
and many other applications
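The headline figures above (a total of roughly 102 million tonnes per year, split 85% paving, 10% roofing, 5% other) imply the following approximate tonnages; a minimal sketch in which the total and the shares come from the text, while the per-use tonnes are derived rather than sourced:

```python
# Rough split of annual global bitumen use, derived from the figures in the
# text: ~102 million tonnes/year, 85% paving / 10% roofing / 5% other.
TOTAL_MT = 102.0  # million tonnes per year

shares = {
    "paving (binder in asphalt concrete)": 0.85,
    "roofing": 0.10,
    "sealing, insulation and other uses": 0.05,
}

breakdown = {use: TOTAL_MT * share for use, share in shares.items()}
for use, mt in breakdown.items():
    print(f"{use}: ~{mt:.1f} Mt/yr")
```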
Rolled asphalt concrete
The largest use of bitumen is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the bitumen consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe.
Asphalt concrete pavement mixes are typically composed of 5% bitumen (known as asphalt cement in the US) and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, bitumen must be heated so it can be mixed with the aggregates at the asphalt mixing facility. The temperature required varies depending upon characteristics of the bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required.
The weight of an asphalt pavement depends upon the aggregate type, the bitumen, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness.
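The unit-weight and mix-proportion figures above lend themselves to a back-of-envelope estimate; in the sketch below, the 112 lb per square yard per inch figure and the 5% binder fraction come from the text, while the road dimensions are hypothetical:

```python
# Quick pavement-tonnage estimate from the figures above: ~112 lb per square
# yard per inch of thickness, and a typical mix of 5% bitumen / 95% aggregate.
# The road geometry below is invented for illustration.
LB_PER_SQYD_PER_IN = 112
LB_PER_SHORT_TON = 2000
BITUMEN_FRACTION = 0.05

def pavement_weight_tons(length_yd, width_yd, thickness_in):
    """Approximate asphalt pavement weight in US short tons (2,000 lb)."""
    return (length_yd * width_yd * LB_PER_SQYD_PER_IN * thickness_in
            / LB_PER_SHORT_TON)

# One mile (1,760 yd) of road, 8 yd (24 ft) wide, paved 3 inches thick:
total = pavement_weight_tons(1760, 8, 3)
print(round(total))                      # about 2,365 short tons of mix
print(round(total * BITUMEN_FRACTION))   # of which ~118 short tons is bitumen
```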
When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The bitumen in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the bitumen removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use.
Asphalt concrete paving is widely used in airports around the world. Due to its sturdiness and ability to be repaired quickly, it is widely used for runways.
Mastic asphalt
Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher bitumen (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated to a temperature of and is spread in layers to form an impervious barrier about thick.
Bitumen emulsion
Bitumen emulsions are colloidal mixtures of bitumen and water. Due to the different surface tensions of the two liquids, stable emulsions cannot be created simply by mixing. Therefore, various emulsifiers and stabilizers are added. Emulsifiers are amphiphilic molecules that differ in the charge of their polar head group. They reduce the surface tension of the emulsion and thus prevent bitumen particles from fusing. The emulsifier charge defines the type of emulsion: anionic (negatively charged) or cationic (positively charged). The concentration of emulsifier is a critical parameter affecting the size of the bitumen particles: higher concentrations lead to smaller particles. Emulsifiers thus have a great impact on the stability, viscosity, breaking strength, and adhesion of the bitumen emulsion. The size of bitumen particles is usually between 0.1 and 50 µm, with a main fraction between 1 µm and 10 µm. Laser diffraction techniques can be used to determine the particle size distribution quickly and easily. Cationic emulsifiers primarily include long-chain amines such as imidazolines, amido-amines, and diamines, which acquire a positive charge when an acid is added. Anionic emulsifiers are often fatty acids extracted from lignin, tall oil, or tree resin, saponified with bases such as NaOH, which creates a negative charge.
During storage of bitumen emulsions, bitumen particles sediment, agglomerate (flocculation), or fuse (coagulation), which leads to a certain instability of the emulsion. How fast this process occurs depends on the formulation of the emulsion, but also on storage conditions such as temperature and humidity. When emulsified bitumen comes into contact with aggregates, the emulsifiers lose their effectiveness, the emulsion breaks down, and an adhering bitumen film is formed; this is referred to as "breaking". The bitumen particles almost instantly create a continuous bitumen film by coagulating and separating from the water, which evaporates. Not every asphalt emulsion breaks at the same rate on contact with aggregates. This enables a classification into rapid-setting (RS), medium-setting (MS), and slow-setting (SS) emulsions, as well as an individual, application-specific optimization of the formulation and a wide field of application. For example, slow-breaking emulsions ensure a longer processing time, which is particularly advantageous for fine aggregates.
Adhesion problems are reported for anionic emulsions in contact with quartz-rich aggregates; cationic emulsions, which achieve better adhesion, are used instead. The extensive range of bitumen emulsions is covered insufficiently by standardization. DIN EN 13808 for cationic asphalt emulsions has existed since July 2005. It describes a classification of bitumen emulsions based on letters and numbers, considering charge, viscosity, and the type of bitumen. The production process of bitumen emulsions is complex. Two methods are commonly used: the colloid mill method and the high internal phase ratio (HIPR) method. In the colloid mill method, a rotor moves at high speed within a stator, into which bitumen and a water-emulsifier mixture are fed. The resulting shear forces generate bitumen particles between 5 µm and 10 µm coated with emulsifiers. The HIPR method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the colloid mill method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations.
Bitumen emulsions are used in a wide variety of applications. They are used in road construction and building protection, primarily in cold recycling mixtures, adhesive coatings, and surface treatments. Due to the lower viscosity in comparison to hot bitumen, processing requires less energy and is associated with significantly less risk of fire and burns. Chipseal involves spraying the road surface with bitumen emulsion followed by a layer of crushed rock, gravel or crushed slag. Slurry seal is a mixture of bitumen emulsion and fine crushed aggregate that is spread on the surface of a road. Cold-mixed asphalt can also be made from bitumen emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and bitumen emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements. Bitumen emulsion based techniques are known to be useful for all classes of roads, and their use may also be possible in the following applications:
Asphalts for heavily trafficked roads (based on the use of polymer-modified emulsions)
Warm emulsion-based mixtures, to improve both their maturation time and mechanical properties
Half-warm technology, in which aggregates are heated up to 100 °C, producing mixtures with properties similar to those of hot asphalts
High-performance surface dressing
Synthetic crude oil
Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100-ton capacity) power shovels and loaded into even larger (400-ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015 Canadian bitumen upgraders were producing over per day of synthetic crude oil, of which 75% was exported to oil refineries in the United States.
In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
Non-upgraded crude bitumen
Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015 Canadian production and exports of non-upgraded bitumen exceeded that of synthetic crude oil at over per day, of which about 65% was exported to the United States.
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude.
Radioactive waste encapsulation matrix
Bitumen was used starting in the 1960s as a hydrophobic matrix aiming to encapsulate radioactive waste such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants has been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because of operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to its geological disposal in deep rock formations. One of the main problems is the swelling of bitumen exposed to radiation and to water. Bitumen swelling is first induced by radiation because of the presence of hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is the swelling of the matrix when the encapsulated hygroscopic salts exposed to water or moisture start to rehydrate and to dissolve. The high concentration of salt in the pore solution inside the bituminised matrix is then responsible for osmotic effects. The water moves in the direction of the concentrated salts, the bitumen acting as a semi-permeable membrane. This also causes the matrix to swell. The swelling pressure due to the osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. When the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached on contact with groundwater and released into the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks.
The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. Under their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are also often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging.
Different types of bitumen have been used: blown bitumen (partially oxidized by air at high temperature after distillation, and therefore harder) and direct-distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct-distillation bitumens, which have a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons.
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations.
Other uses
Roofing shingles and roll roofing account for most of the remaining bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel, and it is also used in paint and marker inks by some exterior paint supply companies to increase the weather resistance and permanence of the paint or ink, and to make the color darker. Bitumen is also used to seal some alkaline batteries during the manufacturing process.
Production
About 40,000,000 tons were produced in 1984. Bitumen is obtained as the "heavy" (i.e., difficult to distill) fraction of crude oil: material with a boiling point greater than around 500 °C is considered asphalt. Vacuum distillation separates it from the other components of crude oil (such as naphtha, gasoline and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust its properties to suit applications. In a de-asphalting unit, the crude bitumen is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product, namely reacting it with oxygen, which makes the product harder and more viscous.
Bitumen is typically stored and transported at temperatures around . Sometimes diesel oil or kerosene is mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a release agent before filling to aid release. Diesel oil is no longer used as a release agent due to environmental concerns.
Oil sands
Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feedstock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have resulted in deeper deposits becoming producible by in situ methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but as a result of the price decline after 2014 it again became uneconomic to build new plants. By 2014, Canadian crude bitumen production averaged about per day and was projected to rise to per day by 2020. The total amount of crude bitumen in Alberta that could be extracted is estimated to be about , which at a rate of would last about 200 years.
Alternatives and bioasphalt
Although uncompetitive economically, bitumen can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. Bitumen can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently.
Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use bitumen alternatives are called green parking lots.
Albanian deposits
Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, in Albania, the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near zero and a softening point (ring and ball) around 120 °C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%.
Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen. In 1875, the exploitation rights were granted by the Ottoman government, and in 1912 they were transferred to the Italian company Simsa. From 1945 the mine was exploited by the Albanian government; since 2001, management has passed to a French company, which organized the mining process for the production of natural bitumen on an industrial scale.
Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine.
Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed in the mixer or in the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermal fusible polyethylene bags.
A life-cycle assessment study of the natural selenizza compared with petroleum bitumen has shown that the environmental impact of the selenizza is about half the impact of the road asphalt produced in oil refineries in terms of carbon dioxide emission.
Recycling
Bitumen is a commonly recycled material in the construction industry. The two most common recycled materials that contain bitumen are reclaimed asphalt pavement (RAP) and reclaimed asphalt shingles (RAS). RAP is recycled at a greater rate than any other material in the United States, and typically contains approximately 5–6% bitumen binder. Asphalt shingles typically contain 20–40% bitumen binder.
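The binder contents quoted above lend themselves to a simple mass-balance sketch of how much virgin binder a RAP blend still needs. The 5.5% RAP binder content below is taken from the 5–6% range cited above; the 5% target binder content is a hypothetical assumption for illustration (real asphalt mix design involves much more than this one equation).

```python
def virgin_binder_needed(rap_fraction, rap_binder=0.055, target_binder=0.05):
    """Mass-balance sketch: binder carried in by the RAP plus virgin
    binder added must equal the target binder content of the final mix.
    rap_fraction: mass fraction of the mix that is reclaimed pavement.
    Returns virgin binder to add, as a fraction of total mix mass."""
    from_rap = rap_fraction * rap_binder  # binder contributed by the RAP
    return max(target_binder - from_rap, 0.0)

# A mix with 20% RAP (at ~5.5% binder) already carries 1.1% binder,
# so only about 3.9% virgin binder is needed.
print(round(virgin_binder_needed(0.20), 4))  # 0.039
```

This is why recycled mixes reduce binder cost roughly in proportion to their RAP fraction, subject to the stiffening effects described below.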
Bitumen naturally becomes stiffer over time due to oxidation, evaporation, exudation, and physical hardening. For this reason, recycled asphalt is typically combined with virgin asphalt, softening agents, and/or rejuvenating additives to restore its physical and chemical properties.
For information on the processing and performance of RAP and RAS, see Asphalt Concrete.
For information on the different types of RAS and associated health and safety concerns, see Asphalt Shingles.
For information on in-place recycling methods used to restore pavements and roadways, see Road Surface.
Economics
Although bitumen typically makes up only 4 to 5 percent (by weight) of the pavement mixture, as the pavement's binder it is also the most expensive component of the road-paving material.
During bitumen's early use in modern paving, oil refiners gave it away. However, bitumen is a highly traded commodity today, and its prices increased substantially in the early 21st century. A U.S. government report states:
"In 2002, asphalt sold for approximately $160 per ton. By the end of 2006, the cost had doubled to approximately $320 per ton, and then it almost doubled again in 2012 to approximately $610 per ton."
The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years."
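The report's per-mile figures follow directly from its quoted tonnage and prices, and can be cross-checked with a few lines of arithmetic:

```python
# Cross-checking the report's figures: 300 tons of asphalt per
# "average" 1-mile four-lane highway, at the quoted $/ton prices.
TONS_PER_MILE = 300
prices = {2002: 160, 2006: 320, 2012: 610}  # $ per ton, from the report

costs = {year: TONS_PER_MILE * p for year, p in prices.items()}
print(costs)                      # {2002: 48000, 2006: 96000, 2012: 183000}
print(costs[2012] - costs[2002])  # 135000 -- the "$135,000 per mile" rise
```

The computed values match the report's $48,000, $96,000, and $183,000 per mile, and the ten-year increase of about $135,000.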
Health and safety
People can be exposed to bitumen in the workplace by breathing in fumes or through skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5 mg/m³ over a 15-minute period.
Bitumen is basically an inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with bitumen, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard/risk of the bitumen emissions. In particular, temperatures greater than 199 °C (390 °F) were shown to produce a greater exposure risk than when bitumen was heated to lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes as a Group 2B possible carcinogen, indicating inadequate evidence of carcinogenicity in humans.
In 2020, scientists reported that bitumen currently is a significant and largely overlooked source of air pollution in urban areas, especially during hot and sunny periods.
A bitumen-like substance found in the Himalayas, known as shilajit, is sometimes used in Ayurvedic medicine, but it is not in fact a tar, resin, or bitumen.
See also
Asphalt plant
Asphaltene
Bioasphalt
Bitumen-based fuel
Bituminous rocks
Blacktop
Cariphalte
Duxit
Macadam
Oil sands
Pitch drop experiment
Pitch (resin)
Road surface
Tar
Tarmac
Sealcoat
Stamped asphalt
Notes
References
Sources
Barth, Edwin J. (1962), Asphalt: Science and Technology, Gordon and Breach.
External links
Pavement Interactive – Asphalt
CSU Sacramento, The World Famous Asphalt Museum!
National Institute for Occupational Safety and Health – Asphalt Fumes
Scientific American, "Asphalt", 20-Aug-1881, p. 121
Alphabet
An alphabet is a standardized set of basic written graphemes (called letters) representing phonemes, units of sounds that distinguish words, of certain spoken languages. Not all writing systems represent language in this way; in a syllabary, each character represents a syllable, and logographic systems use characters to represent words, morphemes, or other semantic units.
The Egyptians created the first alphabet in a technical sense. Its short uniliteral signs were used to write pronunciation guides for logograms (characters that represent words or morphemes) and, later, to write foreign words. The system remained in use up to the 5th century AD. The first fully phonemic script, the Proto-Sinaitic script, which developed into the Phoenician alphabet, is considered to be the first alphabet proper and is the ancestor of most modern alphabets, abjads, and abugidas, including Arabic, Cyrillic, Greek, Hebrew, Latin, and possibly Brahmic. It was created by Semitic-speaking workers and slaves in the Sinai Peninsula in modern-day Egypt, who selected a small number of hieroglyphs commonly seen in their Egyptian surroundings to describe the sounds, as opposed to the semantic values, of the Canaanite languages.
Peter T. Daniels distinguishes an abugida, a set of graphemes that represent consonantal base letters that diacritics modify to represent vowels, like in Devanagari and other South Asian scripts, an abjad, in which letters predominantly or exclusively represent consonants such as the original Phoenician, Hebrew or Arabic, and an alphabet, a set of graphemes that represent both consonants and vowels. In this narrow sense of the word, the first true alphabet was the Greek alphabet, which was based on the earlier Phoenician abjad.
Alphabets are usually associated with a standard ordering of letters. This makes them useful for purposes of collation, which allows words to be sorted in a specific order, commonly known as alphabetical order. It also means that their letters can be used as an alternative method of "numbering" ordered items, in such contexts as numbered lists and number placements. In some languages, letters have names of their own; this is known as acrophony. It is present in some modern scripts, such as Greek, and in many Semitic scripts, such as Arabic, Hebrew, and Syriac, and it was used in some ancient alphabets, such as Phoenician. The system is not universal, however: in the Latin alphabet, letters are instead named by adding a vowel before or after the character. Some scripts, such as Cyrillic, formerly used acrophonic names but later abandoned them for a system similar to that of Latin.
Etymology
The English word alphabet came into Middle English from Late Latin, where it in turn originated from the Greek ἀλφάβητος (alphábētos), formed from the first two letters of the Greek alphabet, alpha (α) and beta (β). The names for the Greek letters, in turn, came from the first two letters of the Phoenician alphabet: aleph, the word for ox, and bet, the word for house.
History
Ancient Near Eastern alphabets
The Ancient Egyptian writing system had a set of some 24 hieroglyphs called uniliterals, glyphs that each represent a single sound. These glyphs were used as pronunciation guides for logograms, to write grammatical inflections, and, later, to transcribe loan words and foreign names. The script was still used a fair amount in the 4th century CE, but after the pagan temples were closed down it was forgotten in the 5th century, until the discovery of the Rosetta Stone. There was also the cuneiform script, which was used to write several ancient languages, primarily Sumerian. The last known use of cuneiform was in 75 CE, after which the script fell out of use.
In the Middle Bronze Age, an apparently "alphabetic" system known as the Proto-Sinaitic script appeared in Egyptian turquoise mines in the Sinai Peninsula, dated to the 15th century BCE, apparently left by Canaanite workers. In 1999, the American Egyptologists John and Deborah Darnell discovered an earlier version of this first alphabet in the Wadi el-Hol valley in Egypt. The script dates to 1800 BCE and shows evidence of having been adapted from specific forms of Egyptian hieroglyphs that could be dated to 2000 BCE, strongly suggesting that the first alphabet had developed about that time. The script was based on letter appearances and names, believed to derive from Egyptian hieroglyphs. It had no characters representing vowels. Originally it was probably a syllabary (a script in which syllables are represented with characters) from which symbols that were not needed were removed. The best-attested Bronze Age alphabet is Ugaritic, invented in Ugarit (Syria) before the 15th century BCE. This was an alphabetic cuneiform script with 30 signs, including three that indicate the following vowel. It was not used after the destruction of Ugarit in 1178 BCE.
The Proto-Sinaitic script eventually developed into the Phoenician alphabet, conventionally called "Proto-Canaanite" before 1050 BCE. The oldest text in Phoenician script is an inscription on the sarcophagus of King Ahiram, c. 1000 BCE. This script is the parent of all western alphabets. By the tenth century BCE, two other forms had distinguished themselves from it: Canaanite and Aramaic. The Aramaic script gave rise to the Hebrew script.
The South Arabian alphabet, a sister script to the Phoenician alphabet, is the script from which the Ge'ez alphabet descended. Ge'ez is an abugida, a writing system in which consonant–vowel sequences are written as units, and was used around the Horn of Africa. Vowel-less alphabets are called abjads, currently exemplified by Arabic, Hebrew, and Syriac. The omission of vowels was not always a satisfactory solution, particularly given the need to preserve sacred texts, so "weak" consonants came to be used to indicate vowels. These letters have a dual function, since they can also be used as pure consonants.
The Proto-Sinaitic script and the Ugaritic script were the first scripts with a limited number of signs instead of using many different signs for words, in contrast to the other widely used writing systems at the time, Cuneiform, Egyptian hieroglyphs, and Linear B. The Phoenician script was probably the first phonemic script, and it contained only about two dozen distinct letters, making it a script simple enough for traders to learn. Another advantage of the Phoenician alphabet was that it could write different languages since it recorded words phonemically.
The Phoenician script was spread across the Mediterranean by the Phoenicians. The Greek alphabet was the first in which vowels have independent letter forms separate from those of consonants. The Greeks chose letters representing sounds that did not exist in Phoenician to represent their vowels. The syllabic Linear B, a script used by the Mycenaean Greeks from the 16th century BCE, had 87 symbols, including five vowels. In its early years, there were many variants of the Greek alphabet, causing many different alphabets to evolve from it.
European alphabets
The Greek alphabet, in its Euboean form, was carried over by Greek colonists to the Italian peninsula by 600 BCE, giving rise to many different alphabets used to write the Italic languages, such as the Etruscan alphabet. One of these became the Latin alphabet, which spread across Europe as the Romans expanded their republic. After the fall of the Western Roman Empire, the alphabet survived in intellectual and religious works. It came to be used for the descendant languages of Latin (the Romance languages) and most of the other languages of western and central Europe. Today, it is the most widely used script in the world.
The Etruscan alphabet remained nearly unchanged for several hundred years, evolving only as the Etruscan language itself changed. Letters used for non-existent phonemes were dropped. Afterwards, however, the alphabet went through many changes. Its final classical form contained 20 letters, four of them vowels (a, e, i, and u), six fewer than the earlier forms. The script in its classical form was used until the 1st century CE. The Etruscan language itself was not used in imperial Rome, but the script was used for religious texts.
Some adaptations of the Latin alphabet have ligatures, in which two letters are combined into one, such as æ in Danish and Icelandic and Ȣ in Algonquian; borrowings from other alphabets, such as the thorn þ in Old English and Icelandic, which came from the Futhark runes; and modified existing letters, such as the eth ð of Old English and Icelandic, which is a modified d. Other alphabets use only a subset of the Latin alphabet, such as Hawaiian, and Italian, which uses the letters j, k, x, y, and w only in foreign words.
Another notable script is Elder Futhark, believed to have evolved out of one of the Old Italic alphabets. Elder Futhark gave rise to other alphabets known collectively as the Runic alphabets. The Runic alphabets were used for Germanic languages from 100 CE to the late Middle Ages, being engraved on stone and jewelry, although inscriptions are also occasionally found on bone and wood. These alphabets have since been replaced with the Latin alphabet, except for decorative use, for which the runes remained in use until the 20th century.
The Old Hungarian script was the writing system of the Hungarians. It was in use throughout the history of Hungary, albeit not as an official writing system. From the 19th century, it once again became increasingly popular.
The Glagolitic alphabet was the initial script of the liturgical language Old Church Slavonic and became, together with the Greek uncial script, the basis of the Cyrillic script. Cyrillic is one of the most widely used modern alphabetic scripts and is notable for its use in Slavic languages and also for other languages within the former Soviet Union. Cyrillic alphabets include Serbian, Macedonian, Bulgarian, Russian, Belarusian, and Ukrainian. The Glagolitic alphabet is believed to have been created by Saints Cyril and Methodius, while the Cyrillic alphabet was created by Clement of Ohrid, their disciple. They feature many letters that appear to have been borrowed from or influenced by Greek and Hebrew.
Asian alphabets
Beyond the logographic Chinese writing, many phonetic scripts exist in Asia. The Arabic alphabet, Hebrew alphabet, Syriac alphabet, and other abjads of the Middle East are developments of the Aramaic alphabet.
Most alphabetic scripts of India and Eastern Asia descend from the Brahmi script, believed to be a descendant of Aramaic.
Hangul
In Korea, Sejong the Great created the Hangul alphabet in 1443 CE. Hangul is a unique alphabet: it is a featural alphabet, in which the design of many of the letters reflects a sound's place of articulation, like P looking like the widened mouth and L looking like the tongue pulled in. The creation of Hangul was planned by the government of the day, and it places individual letters in syllable clusters with equal dimensions, in the same way as Chinese characters. This arrangement allows for mixed-script writing, where one syllable always takes up one type space no matter how many letters are stacked into building that one sound block.
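The syllable-block stacking described above is mirrored directly in Unicode, where every precomposed Hangul syllable is computed arithmetically from the indices of its component letters (19 initial consonants, 21 medial vowels, and 28 finals, index 0 meaning "no final"):

```python
def compose_hangul(initial, medial, final=0):
    """Compose a precomposed Hangul syllable using the Unicode
    arithmetic: 19 initials x 21 medials x 28 finals (0 = none)
    are packed into the block starting at U+AC00."""
    assert 0 <= initial < 19 and 0 <= medial < 21 and 0 <= final < 28
    return chr(0xAC00 + (initial * 21 + medial) * 28 + final)

# Initial 18 = ㅎ (h), medial 0 = ㅏ (a), final 4 = ㄴ (n) -> 한 ("han")
print(compose_hangul(18, 0, 4))  # 한
```

Because the mapping is pure arithmetic, one code point always corresponds to one syllable block, which is exactly the "one type space per syllable" property noted above.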
Zhuyin
Zhuyin, sometimes referred to as Bopomofo, is a semi-syllabary. It transcribes Mandarin phonetically in the Republic of China. After the later establishment of the People's Republic of China and its adoption of Hanyu Pinyin, the use of Zhuyin today is limited. However, it is still widely used in Taiwan. Zhuyin developed from a form of Chinese shorthand based on Chinese characters in the early 1900s and has elements of both an alphabet and a syllabary. Like an alphabet, the phonemes of syllable initials are represented by individual symbols, but like a syllabary, the phonemes of the syllable finals are not; each possible final (excluding the medial glide) has its own character, an example being luan written as ㄌㄨㄢ (l-u-an). The last symbol ㄢ takes place as the entire final -an. While Zhuyin is not a mainstream writing system, it is still often used in ways similar to a romanization system, for aiding pronunciation and as an input method for Chinese characters on computers and cellphones.
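The luan → ㄌㄨㄢ segmentation above can be sketched as a toy lookup. The tables below cover only this one example and are purely illustrative; a real Zhuyin transcription system needs full initial, medial, and final tables plus tone marks.

```python
# Toy sketch of the Zhuyin segmentation described above: an initial
# symbol, a medial glide, and a single symbol for the whole final.
initials = {"l": "ㄌ"}   # syllable initials get individual symbols
medials = {"u": "ㄨ"}    # the medial glide is written separately
finals = {"an": "ㄢ"}    # each possible final has its own character

def to_zhuyin(initial, medial, final):
    return initials[initial] + medials[medial] + finals[final]

print(to_zhuyin("l", "u", "an"))  # ㄌㄨㄢ
```

The key point the sketch illustrates is the hybrid nature of the script: the initial behaves alphabetically (one symbol per phoneme), while the final -an behaves syllabically (one symbol for the whole rime).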
Romanization
European alphabets, especially Latin and Cyrillic, have been adapted for many languages of Asia. Arabic is also widely used, sometimes as an abjad, as with Urdu and Persian, and sometimes as a complete alphabet, as with Kurdish and Uyghur.
Types
The term "alphabet" is used by linguists and paleographers in both a wide and a narrow sense. In a broader sense, an alphabet is a segmental script at the phoneme level—that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish "true" alphabets from two other types of segmental script, abjads, and abugidas. These three differ in how they treat vowels. Abjads have letters for consonants and leave most vowels unexpressed. Abugidas are also consonant-based but indicate vowels with diacritics, a systematic graphic modification of the consonants. The earliest known alphabet using this sense is the Wadi el-Hol script, believed to be an abjad. Its successor, Phoenician, is the ancestor of modern alphabets, including Arabic, Greek, Latin (via the Old Italic alphabet), Cyrillic (via the Greek alphabet), and Hebrew (via Aramaic).
Examples of present-day abjads are the Arabic and Hebrew scripts; true alphabets include Latin, Cyrillic, and Korean Hangul; and abugidas, used to write Tigrinya, Amharic, Hindi, and Thai. The Canadian Aboriginal syllabics are also an abugida, rather than a syllabary, as their name would imply, because each glyph stands for a consonant and is modified by rotation to represent the following vowel. In a true syllabary, each consonant-vowel combination gets represented by a separate glyph.
All three types may be augmented with syllabic glyphs. Ugaritic, for example, is essentially an abjad but has a few syllabic letters; these are the only cases in which vowels are indicated. Coptic has a letter for . Devanagari is typically an abugida augmented with dedicated letters for initial vowels, though some traditions use अ as a zero consonant as the graphic base for such vowels.
The boundaries between the three types of segmental scripts are not always clear-cut. For example, Sorani Kurdish is written in the Arabic script, which, when used for other languages, is an abjad. In Kurdish, writing the vowels is mandatory and whole letters are used, so the script is a true alphabet. Other languages may use a Semitic abjad with mandatory vowel diacritics, effectively making them abugidas. On the other hand, the Phagspa script of the Mongol Empire was based closely on the Tibetan abugida, but vowel marks are written after the preceding consonant rather than as diacritic marks, although short a is not written, as in the Indic abugidas. In the Ge'ez script (the source of the term "abugida"), now used for Amharic and Tigrinya, the vowel marks have been assimilated into the consonant modifications; the result is no longer systematic and must be learned as a syllabary rather than as a segmental script. Even more extreme, the Pahlavi abjad eventually became logographic.
Thus the primary categorisation of alphabets reflects how they treat vowels. For tonal languages, further classification can be based on their treatment of tone, though names do not yet exist to distinguish the various types. Some alphabets disregard tone entirely, especially when it does not carry a heavy functional load, as in Somali and many other languages of Africa and the Americas. Most commonly, tones are indicated by diacritics, in the way that vowels are treated in abugidas; this is the case for Vietnamese (a true alphabet) and Thai (an abugida). In Thai, the tone is determined primarily by the choice of consonant, with diacritics for disambiguation. In the Pollard script, an abugida, vowels are indicated by diacritics, and the placing of the diacritic relative to the consonant is modified to indicate the tone. More rarely, a script may have separate letters for tones, as is the case for Hmong and Zhuang. For many of these scripts, regardless of whether letters or diacritics are used, the most common tone is not marked, just as the most common vowel is not marked in Indic abugidas. In Zhuyin, not only is one of the tones unmarked; there is also a diacritic to indicate the lack of tone, like the virama of Indic.
Alphabetical order
Alphabets often come to be associated with a standard ordering of their letters; this is for collation—namely, for listing words and other items in alphabetical order.
Latin alphabets
The basic ordering of the Latin alphabet (A B C D E F G H I J K L M N O P Q R S T U V W X Y Z), which derives from the Northwest Semitic "Abgad" order, is well established, although languages using this alphabet have different conventions for the treatment of modified letters (such as the French é, à, and ô) and of certain combinations of letters (multigraphs). In French, these are not considered additional letters for collation. However, in Icelandic, accented letters such as á, í, and ö are considered distinct letters representing different vowel sounds from those represented by their unaccented counterparts. In Spanish, ñ is considered a separate letter, but accented vowels such as á and é are not. The digraphs ll and ch were also formerly considered single letters and sorted separately after l and c, but in 1994 the tenth congress of the Association of Spanish Language Academies changed the collating order so that ll came to be sorted between lk and lm in the dictionary and ch between cg and ci; those digraphs were still formally designated as letters, but in 2010 this too was changed, so they are no longer considered letters at all.
In German, words starting with sch- (which spells the German phoneme /ʃ/) are inserted between words with initial sca- and sci- (all incidentally loanwords) instead of appearing after initial sz, as though sch were a single letter. This contrasts with several languages such as Albanian, in which dh-, ë-, gj-, ll-, nj-, rr-, th-, xh-, and zh- all represent phonemes, are considered separate single letters, and follow the letters d, e, g, l, n, r, t, x, and z, respectively, as well as with Hungarian and Welsh. Further, German words with an umlaut are collated ignoring the umlaut, unlike in Turkish, which adopted the graphemes ö and ü and where a word like tüfek would come after tuz in the dictionary. An exception is the German telephone directory, where umlauts are sorted like ä = ae, since names such as Jäger also appear with the spelling Jaeger and are not distinguished in the spoken language.
The Danish and Norwegian alphabets end with æ, ø, å, whereas the Swedish alphabet conventionally puts å, ä, ö at the end. However, æ corresponds phonetically with ä, as ø does with ö.
Early alphabets
It is unknown whether the earliest alphabets had a defined sequence. Some alphabets today, such as the Hanuno'o script, are learned one letter at a time, in no particular order, and are not used for collation where a definite order is required. However, a dozen Ugaritic tablets from the fourteenth century BCE preserve the alphabet in two sequences. One, the ABCDE order later used in Phoenician, has continued with minor changes in Hebrew, Greek, Armenian, Gothic, Cyrillic, and Latin; the other, HMĦLQ, was used in southern Arabia and is preserved today in Ethiopic. Both orders have therefore been stable for at least 3000 years.
Runic used an unrelated Futhark sequence, which was later simplified. Arabic usually uses its own sequence, although it retains the traditional abjadi order, which is used for numbering.
The Brahmic family of alphabets used in India follows a unique order based on phonology: the letters are arranged according to how and where the sounds are produced in the mouth. This organization is used in Southeast Asia, Tibet, Korean hangul, and even Japanese kana, which is not an alphabet.
Acrophony
In Phoenician, each letter was associated with a word that begins with that sound. This is called acrophony, and it continues to be used to varying degrees in Samaritan, Aramaic, Syriac, Hebrew, Greek, and Arabic.
Acrophony was abandoned in Latin, which instead referred to the letters by adding a vowel (usually "e", sometimes "a" or "u") before or after the consonant. Two exceptions were Y and Z, which were borrowed from the Greek alphabet rather than Etruscan and were known as Y Graeca "Greek Y" and zeta (from Greek); this discrepancy was inherited by many European languages, as in the term zed for Z in all forms of English other than American English. Over time, names sometimes shifted or were added, as in double U for W ("double V" in French), the English name for Y, and the American zee for Z. Comparing the letter names in English and French gives a clear reflection of the Great Vowel Shift: A, B, C, and D are pronounced quite differently in today's English than in contemporary French. The French names (from which the English names were derived) preserve the qualities of the English vowels from before the Great Vowel Shift. By contrast, the names of F, L, M, N, and S remain the same in both languages, because "short" vowels were largely unaffected by the Shift.
Cyrillic originally used acrophony based on Slavic words: the first three letter names were azŭ, buky, and vědě, matching the Cyrillic collation order А, Б, В. However, this was later abandoned in favor of a system similar to that of Latin.
Orthography and pronunciation
When an alphabet is adopted or developed to represent a given language, an orthography generally comes into being, providing rules for spelling words according to the principle on which alphabets are based. These rules map letters of the alphabet to the phonemes of the spoken language. In a perfectly phonemic orthography, there would be a consistent one-to-one correspondence between letters and phonemes, so that a writer could predict the spelling of a word given its pronunciation, and a speaker would always know the pronunciation of a word given its spelling. However, this ideal is rarely achieved in practice. Some languages, such as Spanish and Finnish, come close to it; others, such as English, deviate from it to a much larger degree.
The pronunciation of a language often evolves independently of its writing system, and writing systems have been borrowed for languages they were not originally designed for, so the degree to which letters of an alphabet correspond to phonemes of a language varies.
Languages may fail to achieve a one-to-one correspondence between letters and sounds in any of several ways:
A language may represent a given phoneme by a combination of letters rather than just a single letter. Two-letter combinations are called digraphs, and three-letter groups are called trigraphs. German uses the tetragraphs (four letters) "tsch" for the phoneme [tʃ] and (in a few borrowed words) "dsch" for [dʒ]. Kabardian also uses a tetragraph for one of its phonemes, namely "кхъу". Two letters representing one sound occur in several instances in Hungarian as well (where, for instance, cs stands for [tʃ], sz for [s], zs for [ʒ], and dzs for [dʒ]).
A language may represent the same phoneme with two or more different letters or combinations of letters. An example is Modern Greek, which may write the phoneme [i] in six different ways: ι, η, υ, ει, οι, and υι.
A language may spell some words with unpronounced letters that exist for historical or other reasons. For example, the spelling of the Thai word for "beer" [เบียร์] retains a letter for the final consonant "r" present in the English word it borrows, but silences it.
Pronunciation of individual words may change according to the presence of surrounding words in a sentence, a phenomenon known as sandhi.
Different dialects of a language may use different phonemes for the same word.
A language may use different sets of symbols or different rules for distinct vocabulary items, typically for foreign words; for example, the Japanese katakana syllabary is used for foreign words, and English has distinct conventions for spelling loanwords from other languages.
National languages sometimes elect to address the problem of dialects by associating the alphabet with the national standard. Some national languages like Finnish, Armenian, Turkish, Russian, Serbo-Croatian (Serbian, Croatian, and Bosnian), and Bulgarian have a very regular spelling system with nearly one-to-one correspondence between letters and phonemes. Similarly, the Italian verb corresponding to 'spell (out)', compitare, is unknown to many Italians because spelling is usually trivial, as Italian spelling is highly phonemic. In standard Spanish, one can tell the pronunciation of a word from its spelling, but not vice versa, as phonemes sometimes can be represented in more than one way, while a given letter is consistently pronounced. French, with its silent letters, nasal vowels, and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation, though complex, are consistent and predictable with a fair degree of accuracy.
At the other extreme are languages such as English, where pronunciations mostly have to be memorized, as they do not correspond to spellings consistently. For English, this is because the Great Vowel Shift occurred after the orthography was established and because English has acquired a large number of loanwords at different times, retaining their original spelling to varying degrees. However, even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are usually successful; rules to predict spelling from pronunciation have a higher failure rate.
Sometimes, countries have the written language undergo a spelling reform to realign the writing with the contemporary spoken language. These can range from simple spelling changes and word forms to switching the entire writing system. For example, Turkey switched from the Arabic alphabet to a Latin-based Turkish alphabet; Kazakh changed from an Arabic script to a Cyrillic script under the Soviet Union's influence and then, in 2021, began a transition to a Latin alphabet, similar to Turkish. The Cyrillic script was also formerly official in Uzbekistan and Turkmenistan before both switched to the Latin alphabet; Uzbekistan is additionally reforming its alphabet to use diacritics on the letters currently marked by apostrophes and on those written as digraphs.
The standard system of symbols used by linguists to represent sounds in any language, independently of orthography, is called the International Phonetic Alphabet.
See also
Abecedarium
Acrophony
Akshara
Alphabet book
Alphabet effect
Alphabet song
Alphabetical order
Butterfly Alphabet
Character encoding
Constructed script
Fingerspelling
NATO phonetic alphabet
Lipogram
List of writing systems
Pangram
Thoth
Transliteration
Unicode
References
Bibliography
Further reading
Josephine Quinn, "Alphabet Politics" (review of Silvia Ferrara, The Greatest Invention: A History of the World in Nine Mysterious Scripts, translated from the Italian by Todd Portnowitz, Farrar, Straus and Giroux, 2022, 289 pp.; and Johanna Drucker, Inventing the Alphabet: The Origins of Letters from Antiquity to the Present, University of Chicago Press, 2022, 380 pp.), The New York Review of Books, vol. LXX, no. 1 (19 January 2023), pp. 6, 8, 10.
External links
The Origins of abc
"Language, Writing and Alphabet: An Interview with Christophe Rico", Damqātum 3 (2007)
Michael Everson's Alphabets of Europe
Evolution of alphabets, animation by Prof. Robert Fradkin at the University of Maryland
How the Alphabet Was Born from Hieroglyphs—Biblical Archaeology Review
An Early Hellenic Alphabet
Museum of the Alphabet
The Alphabet, BBC Radio 4 discussion with Eleanor Robson, Alan Millard and Rosalind Thomas (In Our Time, 18 December 2003)
Orthography
|
https://en.wikipedia.org/wiki/Adobe
|
Adobe is a building material made from earth and organic materials; the word adobe is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or to various architectural styles like Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials and is used throughout the world.
Adobe architecture has been dated to before 5,100 B.C.
Description
Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth.
Strength
In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake.
Distribution
Buildings made of sun-dried earth are common throughout the world (the Middle East, Western Asia, North Africa, West Africa, South America, southwestern North America, and Southwestern and Eastern Europe). Adobe has been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handfuls or basketfuls of adobe until the Spanish introduced them to making bricks. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Its wide use can be attributed to its simplicity of design and manufacture, and to its economy.
Etymology
The word adobe has existed for around 4,000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian () word ḏbt "mud brick" (with vowels unwritten). Middle Egyptian evolved into Late Egyptian and finally to Coptic (), where it appeared as ⲧⲱⲃⲉ tōbə. This was adopted into Arabic as aṭ-ṭawbu or aṭ-ṭūbu, with the definite article al- attached to the root tuba. This was assimilated into the Old Spanish language as adobe , probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction.
In more modern English usage, the term adobe has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method.
Composition
An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles, with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight.
No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition.
Material properties
Adobe walls are load bearing, i.e. they carry their own weight into the foundation rather than being carried by another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of 300 lbf/in2 (2.07 newton/mm2) for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending loads. The building codes require that the building sustain a 1 g lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture strength of at least 50 lbf/in2 (0.345 newton/mm2) for the finished block.
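The two strength figures quoted above convert between US customary and SI units as follows; this short sketch (the function name is ours) just performs the lbf/in2 to N/mm2 arithmetic.

```python
# Convert lbf/in^2 (psi) to N/mm^2 (MPa): 1 lbf = 4.448... N, 1 in^2 = 645.16 mm^2.
LBF_IN_NEWTONS = 4.4482216152605
MM2_PER_IN2 = 25.4 ** 2          # 645.16 mm^2 per square inch

def psi_to_n_per_mm2(psi: float) -> float:
    return psi * LBF_IN_NEWTONS / MM2_PER_IN2

print(round(psi_to_n_per_mm2(300), 2))   # 2.07 N/mm^2, minimum compressive strength
print(round(psi_to_n_per_mm2(50), 3))    # 0.345 N/mm^2, minimum modulus of rupture
```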
In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material.
Thermodynamic material properties show significant variation in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia, and conclude that experimental tests should be performed over a longer period of time than usual, preferably with changing thermal jumps. One source gives an effective R-value for a north-facing 10-in wall of R0 = 10 hr ft2 °F/Btu, which corresponds to a thermal conductivity k = 10 in × (1 ft/12 in)/R0 = 0.33 Btu/(hr ft °F) or 0.57 W/(m K), in agreement with the thermal conductivity reported by another source. To determine the total R-value of a wall, scale R0 by the thickness of the wall in inches. The thermal resistance of adobe is also stated as an R-value of R0 = 4.1 hr ft2 °F/Btu for a 10-in wall. Another source provides the following properties: conductivity = 0.30 Btu/(hr ft °F) or 0.52 W/(m K); specific heat capacity = 0.24 Btu/(lb °F) or 1 kJ/(kg K); and density = 106 lb/ft3 or 1700 kg/m3, giving a heat capacity of 25.4 Btu/(ft3 °F) or 1700 kJ/(m3 K). Using an average thermal conductivity of k = 0.32 Btu/(hr ft °F) or 0.55 W/(m K), the thermal diffusivity is calculated to be 0.013 ft2/hr or 3.3x10−7 m2/s.
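The diffusivity figure at the end of the paragraph above follows from the relation α = k/(ρ·cp); the sketch below reproduces it from the quoted SI values (variable names are ours).

```python
# Thermal diffusivity of adobe, alpha = k / (rho * cp), from the SI values quoted.
k = 0.55        # thermal conductivity, W/(m K)
rho = 1700.0    # density, kg/m^3
cp = 1000.0     # specific heat capacity, J/(kg K)  (= 1 kJ/(kg K))

alpha = k / (rho * cp)    # thermal diffusivity, m^2/s
print(f"{alpha:.2e}")     # ~3.24e-07, consistent with the ~3.3e-07 quoted above
```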
Uses
Poured and puddled adobe walls
Poured and puddled adobe (puddled clay, piled earth), today called cob, is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay-and-sand-based material worked into a dense, plastic state. These were the oldest methods of building with adobe in the Americas; later, holes in the ground were used as forms, and wooden forms for making individual bricks were introduced by the Spanish.
Adobe bricks
Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking.
The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage.
Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar, or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test of the soil content is done first: a sample of the soil is mixed with water in a clear container, creating an almost completely saturated liquid. The container is shaken vigorously for one minute and then allowed to settle for a day until the soil has separated into layers. Heavier particles settle out first, with sand above them, silt above that, and very fine clay and organic matter remaining in suspension for days. After the water has cleared, percentages of the various particles can be determined. Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than clay, not less than sand, and never more than silt.
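The jar-test rule of thumb above (50–60 percent sand, 35–40 percent clay) can be written as a simple check; the function below only illustrates that quoted rule and is not a published standard.

```python
# Rule-of-thumb check from the jar test: 50-60% sand and 35-40% clay
# yield strong bricks. Thresholds are the ones quoted in the text.
def good_brick_soil(sand_pct: float, clay_pct: float) -> bool:
    return 50 <= sand_pct <= 60 and 35 <= clay_pct <= 40

print(good_brick_soil(55, 38))   # True: within both ranges
print(good_brick_soil(75, 15))   # False: too sandy, too little clay
```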
During the Great Depression, designer and builder Hugh W. Comstock used cheaper materials and made a specialized adobe brick called "Bitudobe." His first adobe house was built in 1936. In 1948, he published the book Post-Adobe; Simplified Adobe Construction Combining A Rugged Timber Frame And Modern Stabilized Adobe, which described his method of construction, including how to make "Bitudobe." In 1938, he served as an adviser to the architects Franklin & Kump Associates, who built the Carmel High School, which used his Post-adobe system.
Adobe wall construction
The ground supporting an adobe structure should be compressed, as the weight of an adobe wall is significant and foundation settling may cause cracking of the wall. The footing depth should be below the ground frost level. The footing and stem wall are commonly 24 and 14 inches thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls rarely rise above two stories, as they are load bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above. Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls better able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied. These protect the adobe wall from water damage, but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need the protection of plasters.
Adobe roof
The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe.
Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework has been preformed using a steel framing and a layering of a metal fencing or wiring over the framework to allow an even load as masses of adobe are spread across the metal fencing like cob and allowed to air dry accordingly. This method was demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking.
The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied.
To construct a flat adobe roof, beams of wood were laid to span the building, the ends of which were attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure a greater expanse is covered when placing the bricks onto the roof. Following each individual brick should be a layer of adobe mortar, recommended to be at least thick to make certain there is ample strength between the brick's edges and also to provide a relative moisture barrier during rain.
Roof design evolved around 1850 in the American Southwest. Three inches of adobe mud was applied on top of the latillas, and then 18 inches of dry adobe dirt was applied to the roof. The dirt was contoured into a low slope leading to a downspout, known as a canal. When moisture reached the roof, the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed.
Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls.
In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances referencing the UBC added requirements for building with adobe. These included a restriction of adobe structures to one story, requirements for the adobe mix (compressive and shear strength), and new requirements stating that every building must be designed to withstand seismic activity, specifically lateral forces. By the 1980s, however, seismic-related changes in the California Building Code effectively ended solid-wall adobe construction in California, although post-and-beam adobe and veneers are still being used.
Adobe around the world
The largest structure ever made from adobe is the Arg-é Bam, built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks, and the ciudadelas of Chan Chan and Tambo Colorado, both in Peru.
See also
used adobe walls
(waterproofing plaster)
Taq Kasra (also known as Ctesiphon Arch) in Iraq is the largest mud brick arch in the world, built beginning in 540 AD
References
External links
Soil-based building materials
Masonry
Adobe buildings and structures
Appropriate technology
Vernacular architecture
Sustainable building
Buildings and structures by construction material
Western (genre) staples and terminology
|
https://en.wikipedia.org/wiki/Ampere
|
The ampere (symbol: A), often shortened to amp, is the unit of electric current in the International System of Units (SI). One ampere is equal to 1 coulomb of charge moving past a point in 1 second, or about 6.241x1018 electrons' worth of charge moving past a point in 1 second. It is named after the French mathematician and physicist André-Marie Ampère (1775–1836), considered the father of electromagnetism along with the Danish physicist Hans Christian Ørsted.
As of the 2019 redefinition of the SI base units, the ampere is defined by fixing the elementary charge e to be exactly 1.602176634x10−19 coulomb, which means an ampere is an electric current equivalent to one elementary charge moving every 1.602176634x10−19 seconds, or about 6.241x1018 elementary charges moving in a second. Prior to the redefinition, the ampere was defined as the current passing through two parallel wires 1 metre apart that produces a magnetic force of 2x10−7 newtons per metre.
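Because the 2019 definition fixes the elementary charge exactly, the number of elementary charges passing per second at one ampere is simply its reciprocal; the sketch below (variable names are ours) shows the arithmetic.

```python
# The 2019 SI definition fixes the elementary charge exactly (in coulombs).
E_CHARGE = 1.602176634e-19

# Elementary charges passing a point per second at a current of 1 A:
charges_per_ampere = 1 / E_CHARGE
print(f"{charges_per_ampere:.6e}")   # ~6.241509e+18 per second
```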
The earlier CGS system has two units of current, one structured similarly to the SI's and the other using Coulomb's law as a fundamental relationship, with the unit of charge defined by measuring the force between two charged metal plates. The unit of current is then defined as one unit of charge per second. In SI, the unit of charge, the coulomb, is defined as the charge carried by one ampere during one second.
History
The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current.
The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized.
The "international ampere" was an early realization of the ampere, defined as the current that would deposit 0.001118 grams of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is 0.99985 A.
Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship P = IV, and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance.
Former definition in the SI
Until 2019, the SI defined the ampere as follows:
The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2x10−7 newtons per metre of length.
Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere.
The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge Q was determined by steady current I flowing for a time t as Q = It.
This definition of the ampere was most accurately realised using a Kibble balance, but in practice the unit was maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two could be tied to physical phenomena that are relatively easy to reproduce, the Josephson effect and the quantum Hall effect, respectively.
Techniques to establish the realisation of an ampere had a relative uncertainty of approximately a few parts in 107, and involved realisations of the watt, the ohm and the volt.
Present definition
The 2019 redefinition of the SI base units defined the ampere by taking the fixed numerical value of the elementary charge e to be 1.602176634x10−19 when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom.
The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge Q is determined by steady current I flowing for a time t as Q = It.
Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is "). The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule.
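The ampere–coulomb relation described above can be sketched directly as Q = I·t; the function name below is illustrative.

```python
# Charge accumulated by a steady current: Q = I * t (coulombs = amperes * seconds).
def charge_coulombs(current_a: float, time_s: float) -> float:
    return current_a * time_s

# A steady 1.2 A charging current flowing for one hour:
print(charge_coulombs(1.2, 3600))   # 4320.0 C
```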
Units derived from the ampere
The International System of Units (SI) is based on 7 SI base units (the second, metre, kilogram, kelvin, ampere, mole, and candela) representing 7 fundamental types of physical quantity, or "dimensions" (time, length, mass, temperature, electric current, amount of substance, and luminous intensity, respectively), with all other SI units being defined using these. These SI derived units can either be given special names, e.g. watt, volt, lux, or be defined in terms of others, e.g. metre per second. The units with special names derived from the ampere are:
There are also some SI units that are frequently used in the context of electrical engineering and electrical appliances, but can be defined independently of the ampere, notably the hertz, joule, watt, candela, lumen, and lux.
SI prefixes
Like other SI units, the ampere can be modified by adding a prefix that multiplies it by a power of 10.
See also
Ammeter
Ampacity (current-carrying capacity)
Electric current
Electric shock
Hydraulic analogy
Magnetic constant
Orders of magnitude (current)
References
External links
The NIST Reference on Constants, Units, and Uncertainty
NIST Definition of ampere and μ0
SI base units
Units of electric current
|
https://en.wikipedia.org/wiki/Mouthwash
|
Mouthwash, mouth rinse, oral rinse, or mouth bath is a liquid which is held in the mouth passively or swirled around the mouth by contraction of the perioral muscles and/or movement of the head, and may be gargled, where the head is tilted back and the liquid bubbled at the back of the mouth.
Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the mouth, although other mouthwashes might be given for other reasons such as for their analgesic, anti-inflammatory or anti-fungal action. Additionally, some rinses act as saliva substitutes to neutralize acid and keep the mouth moist in xerostomia (dry mouth). Cosmetic mouthrinses temporarily control or reduce bad breath and leave the mouth with a pleasant taste.
Rinsing with water or mouthwash after brushing with a fluoride toothpaste can reduce the availability of salivary fluoride. This can lower the anti-cavity re-mineralization and antibacterial effects of fluoride. Fluoridated mouthwash may mitigate this effect or, in high concentrations, increase available fluoride, but is not as cost-effective as leaving the fluoride toothpaste on the teeth after brushing. A group of experts discussing post-brushing rinsing in 2012 found that although clear guidance is given in many public health advice publications to "spit, avoid rinsing with water/excessive rinsing with water", they believed there was a limited evidence base for best practice.
Use
Common use involves rinsing the mouth with about 20–50 ml (about 2/3 to 1 2/3 fl oz) of mouthwash. The wash is typically swished or gargled for about half a minute and then spat out. Most companies suggest not drinking water immediately after using mouthwash. In some brands, the expectorate is stained, so that one can see the bacteria and debris.
Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing. Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK. A fluoride mouthrinse can be used at a different time of the day to brushing.
Gargling is where the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection. One commonly used way is with infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink so the liquid can be rinsed away.
Dangerous misuse
If one drinks mouthwash, serious harm and even death can quickly result from the high alcohol content and other harmful substances in mouthwash. It is a common cause of death among homeless people during winter months, because a person can feel warmer after drinking it.
Effects
The most-commonly-used mouthwashes are commercial antiseptics, which are used at home as part of an oral hygiene routine. Mouthwashes combine ingredients to treat a variety of oral conditions. Variations are common, and mouthwash has no standard formulation, so its use and recommendation involves concerns about patient safety. Some manufacturers of mouthwash state that their antiseptic and antiplaque mouthwashes kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although they approve many mouthwashes.
For many patients, however, the mechanical methods could be tedious and time-consuming, and, additionally, some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthwashes, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor.
Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse, as they dry out the mouth. Soreness, ulceration and redness may sometimes occur (e.g., aphthous stomatitis or allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients, such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. saltwater), or foregoing mouthwash entirely.
Prescription mouthwashes are used prior to and after oral surgery procedures, such as tooth extraction, or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. "Magic mouthwashes" are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it shows no difference in efficacy between the most common magic-mouthwash formulation, on the one hand, and commercial mouthwashes (such as chlorhexidine) or a saline/baking soda solution, on the other. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief and in shortening the healing time of oral mucositis from cancer therapies.
History
The first known references to mouth rinsing are in Ayurveda, for the treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil. The ancient Chinese also gargled salt water, tea and wine as a form of mouthwash after meals, due to the antiseptic properties of those liquids.
Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as Coptis trifolia. Indeed, Aztec dentistry was more advanced than European dentistry of the age. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers.
Anton van Leeuwenhoek, the famous 17th-century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this resulted in the immediate immobilization or killing of the organisms suspended in the water. He then tried rinsing his own mouth, and that of somebody else, with a mouthwash containing vinegar or brandy, and found that living organisms remained in the dental plaque. He concluded, correctly, that the mouthwash either did not reach the plaque organisms or was not present long enough to kill them.
In 1892, German Richard Seifert invented mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden.
That remained the state of affairs until the late 1960s when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours.
Since then commercial interest in mouthwashes has been intense and several newer products claim effectiveness in reducing the build-up in dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the volatile sulfur compound–creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and unpleasant mouth taste. For example, the number of mouthwash variants in the United States of America has grown from 15 (1970) to 66 (1998) to 113 (2012).
Research
Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as Streptococcus mutans has led to research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent these bacteria from regrowing, future treatments could provide a viable long-term solution.
A clinical trial and laboratory studies have shown that alcohol-containing mouthwash could reduce the growth of Neisseria gonorrhoeae in the pharynx. However, subsequent trials have found that there was no difference in gonorrhoea cases among men using daily mouthwash compared to those who did not use mouthwash for 12 weeks.
Ingredients
Alcohol
Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing, although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or, indeed, be the sole cause of halitosis in other individuals.
It is hypothesized that alcohol in mouthwashes acts as a carcinogen (cancer-inducing agent). Generally, there is no scientific consensus about this. One review stated:
The same researchers also state that the risk of acquiring oral cancer rises almost five times for users of alcohol-containing mouthwash who neither smoke nor drink (with a higher rate of increase for those who do). In addition, the authors highlight side effects from several mainstream mouthwashes that included dental erosion and accidental poisoning of children. The review garnered media attention and conflicting opinions from other researchers. Yinka Ebo of Cancer Research UK disputed the findings, concluding that "there is still not enough evidence to suggest that using mouthwash that contains alcohol will increase the risk of mouth cancer". Studies conducted in 1985, 1995, 2003, and 2012 did not support an association between alcohol-containing mouth rinses and oral cancer. Andrew Penman, chief executive of The Cancer Council New South Wales, called for further research on the matter. In a March 2009 brief, the American Dental Association said "the available evidence does not support a connection between oral cancer and alcohol-containing mouthrinse". Many newer brands of mouthwash are alcohol-free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption.
Benzydamine (analgesic)
In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly used before meals to reduce discomfort while eating.
Benzoic acid
Benzoic acid acts as a buffer.
Betamethasone
Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa such as the severe forms of aphthous stomatitis.
Cetylpyridinium chloride (antiseptic, antimalodor)
Cetylpyridinium chloride containing mouthwash (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration.
Chlorhexidine digluconate and hexetidine (antiseptic)
Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.05–0.2% solution as a mouthwash. There is no evidence that higher concentrations are more effective in controlling dental plaque and gingivitis. A randomized clinical trial conducted at Rabat University in Morocco found better plaque inhibition when a 0.12% chlorhexidine rinse with an alcohol base was used, compared to an alcohol-free 0.1% chlorhexidine mouthrinse.
Chlorhexidine has good substantivity (the ability of a mouthwash to bind to hard and soft tissues in the mouth). It has anti-plaque action, and also some anti-fungal action. It is especially effective against Gram-negative rods. The proportion of Gram-negative rods increase as gingivitis develops, so it is also used to reduce gingivitis. It is sometimes used as an adjunct to prevent dental caries and to treat periodontal disease, although it does not penetrate into periodontal pockets well. Chlorhexidine mouthwash alone is unable to prevent plaque, so it is not a substitute for regular toothbrushing and flossing. Instead, chlorhexidine mouthwash is more effective when used as an adjunctive treatment with toothbrushing and flossing. In the short term, if toothbrushing is impossible due to pain, as may occur in primary herpetic gingivostomatitis, chlorhexidine mouthwash is used as a temporary substitute for other oral hygiene measures. It is not suited for use in acute necrotizing ulcerative gingivitis, however. Rinsing with chlorhexidine mouthwash before and after a tooth extraction may reduce the risk of a dry socket. Other uses of chlorhexidine mouthwash include prevention of oral candidiasis in immunocompromised persons, treatment of denture-related stomatitis, mucosal ulceration/erosions and oral mucosal lesions, general burning sensation and many other uses.
Chlorhexidine mouthwash is known to have minor adverse effects. Chlorhexidine binds to tannins, meaning that prolonged use in persons who consume coffee, tea or red wine is associated with extrinsic staining (i.e. removable staining) of teeth. A systematic review of commercial chlorhexidine products with anti-discoloration systems (ADSs) found that the ADSs were able to reduce tooth staining without affecting the beneficial effects of chlorhexidine. Chlorhexidine mouthwash can also cause taste disturbance or alteration. Chlorhexidine is rarely associated with other issues like overgrowth of enterobacteria in persons with leukemia, desquamation, irritation, and stomatitis of oral mucosa, salivary gland pain and swelling, and hypersensitivity reactions including anaphylaxis.
Hexetidine also has anti-plaque, analgesic, astringent and anti-malodor properties, but is considered an inferior alternative to chlorhexidine.
Edible oils
In traditional Ayurvedic medicine, the use of oil mouthwashes is called "Kavala" ("oil swishing") or "Gandusha", and this practice has more recently been re-marketed by the complementary and alternative medicine industry as "oil pulling". Its promoters claim it works by "pulling out" "toxins", which are known as ama in Ayurvedic medicine, and thereby reducing inflammation. Ayurvedic literature claims that oil pulling is capable of improving oral and systemic health, including a benefit in conditions such as headaches, migraines, diabetes mellitus, asthma, and acne, as well as whitening teeth.
Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, and the other health claims of oil pulling have failed scientific verification or have not been investigated. There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling.
The mouth is rinsed with approximately one tablespoon of oil for 10–20 minutes then spat out. Sesame oil, coconut oil and ghee are traditionally used, but newer oils such as sunflower oil are also used.
Essential oils
Essential oil constituents with some antibacterial properties include phenolic compounds and monoterpenes such as eucalyptol, eugenol, hinokitiol, menthol, phenol, and thymol.
Essential oils are oils which have been extracted from plants. Mouthwashes based on essential oils could be more effective than traditional mouthcare as anti-gingival treatments. They have been found effective in reducing halitosis, and are being used in several commercial mouthwashes.
Fluoride (anticavity)
Anti-cavity mouthwashes use sodium fluoride to protect against tooth decay. Fluoride-containing mouthwashes are used to prevent dental caries in individuals considered at higher risk of tooth decay, whether due to xerostomia related to salivary dysfunction or side effects of medication, to not drinking fluoridated water, or to being physically unable to care for their oral needs (brushing and flossing), and as treatment for those with dentinal hypersensitivity or gingival recession/root exposure.
Flavoring agents and Xylitol
Flavoring agents include sweeteners such as sorbitol, sucralose, sodium saccharin, and xylitol, which stimulate salivary function through their sweetness and taste; this helps restore the mouth to a neutral level of acidity.
Xylitol rinses double as a bacterial inhibitor and have been used as a substitute for alcohol to avoid the dryness of mouth associated with alcohol.
Hydrogen peroxide
Hydrogen peroxide can be used as an oxidizing mouthwash (e.g. Peroxyl, 1.5%). It kills anaerobic bacteria, and also has a mechanical cleansing action when it froths as it comes into contact with debris in mouth. It is often used in the short term to treat acute necrotising ulcerative gingivitis. Side effects can occur with prolonged use, including hypertrophy of the lingual papillae.
Lactoperoxidase (saliva substitute)
Enzymes and non-enzymatic proteins, such as lactoperoxidase, lysozyme, and lactoferrin, have been used in mouthwashes (e.g., Biotene) to reduce levels of oral bacteria, and, hence, of the acids produced by these bacteria.
Lidocaine/xylocaine
Oral lidocaine is useful for treating the symptoms of mucositis (inflammation of the mucous membranes) induced by radiation or chemotherapy. There is evidence that lidocaine anesthetic mouthwash can be systemically absorbed, as shown when it was tested in patients with oral mucositis who underwent a bone marrow transplant.
Methyl salicylate
Methyl salicylate functions as an antiseptic, antiinflammatory, and analgesic agent, a flavoring, and a fragrance. Methyl salicylate has some anti-plaque action, but less than chlorhexidine. Methyl salicylate does not stain teeth.
Nystatin
Nystatin suspension is an antifungal ingredient used for the treatment of oral candidiasis.
Potassium oxalate
A randomized clinical trial found promising results in controlling and reducing dentine hypersensitivity when potassium oxalate mouthwash was used in conjunction with toothbrushing.
Povidone/iodine (PVP-I)
A 2005 study found that gargling three times a day with simple water or with a povidone-iodine solution was effective in preventing upper respiratory infection and decreasing the severity of symptoms if contracted. Other sources attribute the benefit to a simple placebo effect.
PVP-I in general covers "a wider virucidal spectrum, covering both enveloped and nonenveloped viruses, than the other commercially available antiseptics", which also includes the novel SARS-CoV-2 virus.
Sanguinarine
Sanguinarine-containing mouthwashes are marketed as anti-plaque and anti-malodor treatments. Sanguinarine is a toxic alkaloid herbal extract, obtained from plants such as Sanguinaria canadensis (bloodroot), Argemone mexicana (Mexican prickly poppy), and others. However, its use is strongly associated with the development of leukoplakia (a white patch in the mouth), usually in the buccal sulcus. This type of leukoplakia has been termed "sanguinaria-associated keratosis", and more than 80% of people with leukoplakia in the vestibule of the mouth have used this substance. Upon stopping contact with the causative substance, the lesions may persist for years. Although this type of leukoplakia may show dysplasia, the potential for malignant transformation is unknown. Ironically, elements within the complementary and alternative medicine industry promote the use of sanguinaria as a therapy for cancer.
Sodium bicarbonate (baking soda)
Sodium bicarbonate is sometimes combined with salt to make a simple homemade mouthwash, indicated for any of the reasons that a saltwater mouthwash might be used. Pre-mixed mouthwashes of 1% sodium bicarbonate and 1.5% sodium chloride in aqueous solution are marketed, although pharmacists will easily be able to produce such a formulation from the base ingredients when required. Sodium bicarbonate mouthwash is sometimes used to remove viscous saliva and to aid visualization of the oral tissues during examination of the mouth.
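The pre-mixed formulation above is stated in % w/v (grams of solute per 100 mL of solution), so the solute masses for any batch size follow directly; a minimal sketch (the function name is illustrative):

```python
# Grams of solute for a % w/v aqueous solution
# (assumption: % w/v means grams of solute per 100 mL of solution).
def grams_needed(percent_wv, volume_ml):
    return percent_wv / 100 * volume_ml

# A 500 mL batch of the 1% bicarbonate / 1.5% chloride rinse:
print(grams_needed(1.0, 500))  # 5.0 g sodium bicarbonate
print(grams_needed(1.5, 500))  # 7.5 g sodium chloride
```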
Sodium chloride (salt)
Saline has a mechanical cleansing action and an antiseptic action, as it is a hypertonic solution in relation to bacteria, which undergo lysis. The heat of the solution produces a therapeutic increase in blood flow (hyperemia) to the surgical site, promoting healing. Hot saltwater mouthwashes also encourage the draining of pus from dental abscesses. In contrast, if heat is applied on the side of the face (e.g., hot water bottle) rather than inside the mouth, it may cause a dental abscess to drain extra-orally, which is later associated with an area of fibrosis on the face (see cutaneous sinus of dental origin).
Saltwater mouthwashes are also routinely used after oral surgery, to keep food debris out of healing wounds and to prevent infection. Some oral surgeons consider saltwater mouthwashes the mainstay of wound cleanliness after surgery. In dental extractions, hot saltwater mouthbaths should start about 24 hours after a dental extraction. The term mouth bath implies that the liquid is passively held in the mouth, rather than vigorously swilled around (which could dislodge a blood clot). Once the blood clot has stabilized, the mouthwash can be used more vigorously. These mouthwashes tend to be advised for use about 6 times per day, especially after meals (to remove food from the socket).
Sodium lauryl sulfate (foaming agent)
Sodium lauryl sulfate (SLS) is used as a foaming agent in many oral hygiene products, including many mouthwashes. It may be advisable to use mouthwash at least an hour after brushing with a toothpaste that contains SLS, since the anionic compounds in the SLS toothpaste can deactivate cationic agents present in the mouthwash.
Sucralfate
Sucralfate is a mucosal coating agent, composed of an aluminum salt of sulfated sucrose. It is not recommended for use in the prevention of oral mucositis in head and neck cancer patients receiving radiotherapy or chemoradiation, due to a lack of efficacy found in a well-designed, randomized controlled trial.
Tetracycline (antibiotic)
Tetracycline is an antibiotic which may sometimes be used as a mouthwash in adults (it causes red staining of teeth in children). It is sometimes used for herpetiform ulceration (an uncommon type of aphthous stomatitis), but prolonged use may lead to oral candidiasis, as the fungal population of the mouth overgrows in the absence of enough competing bacteria. Similarly, minocycline mouthwashes of 0.5% concentration can relieve symptoms of recurrent aphthous stomatitis. Erythromycin is similar.
Tranexamic acid
A 4.8% tranexamic acid solution is sometimes used as an antifibrinolytic mouthwash to prevent bleeding during and after oral surgery in persons with coagulopathies (clotting disorders) or who are taking anticoagulants (blood thinners such as warfarin).
Triclosan
Triclosan is a non-ionic chlorinated bisphenol antiseptic found in some mouthwashes. When used in mouthwash (e.g. 0.03%), there is moderate substantivity, broad-spectrum anti-bacterial action, some anti-fungal action, and a significant anti-plaque effect, especially when combined with a copolymer or zinc citrate. Triclosan does not cause staining of the teeth. The safety of triclosan has been questioned.
Zinc
Astringents like zinc chloride provide a pleasant-tasting sensation and shrink tissues. Zinc, when used in combination with other antiseptic agents, can limit the buildup of tartar.
See also
Sodium fluoride/malic acid
Virucide
References
External links
Article on Bad-Breath Prevention Products – from MSNBC
Mayo Clinic Q&A on Magic Mouthwash for chemotherapy sores
American Dental Association article on mouthwash
Dentifrices
Oral hygiene
Drug delivery devices
Dosage forms
|
https://en.wikipedia.org/wiki/Asteroid
|
An asteroid is a minor planet—an object that is neither a true planet nor a comet—that orbits within the inner Solar System. They are rocky, metallic or icy bodies with no atmosphere. Sizes and shapes of asteroids vary significantly, ranging from 1-meter rocks to a dwarf planet almost 1000 km in diameter.
Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in the main asteroid belt. Asteroids are generally classified into three types: C-type, M-type, and S-type, named after and generally identified with carbonaceous, metallic, and silicaceous compositions, respectively. The size of asteroids varies greatly; the largest, Ceres, is almost 1,000 km across and qualifies as a dwarf planet. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun.
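The quoted three-to-six-year periods are consistent with Kepler's third law: for a body orbiting the Sun, the period in years equals the semi-major axis in AU raised to the power 3/2. A minimal check:

```python
# Kepler's third law in solar units: T (years) = a (AU) ** 1.5
def orbital_period_years(a_au):
    return a_au ** 1.5

# Typical main-belt semi-major axes:
print(round(orbital_period_years(2.1), 1))  # about 3.0 years
print(round(orbital_period_years(3.3), 1))  # about 6.0 years
```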
Asteroids have been historically observed from Earth; the Galileo spacecraft provided the first close observation of an asteroid. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, will study ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched in October 2023, will study a metallic asteroid of the same name.
Near-Earth asteroids can threaten all life on the planet; an asteroid impact event resulted in the Cretaceous–Paleogene extinction. Different asteroid deflection strategies have been proposed; the Double Asteroid Redirection Test spacecraft, or DART, was launched in 2021 and intentionally impacted Dimorphos in September 2022, successfully altering its orbit by crashing into it.
History of observations
Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye. When favorably positioned, 4 Vesta can be seen in dark skies. Rarely, small asteroids passing close to Earth may be visible to the naked eye for a short amount of time. , the Minor Planet Center had data on 1,199,224 minor planets in the inner and outer Solar System, of which about 614,690 had enough information to be given numbered designations.
Discovery of Ceres
In 1772, German astronomer Johann Elert Bode, citing Johann Daniel Titius, published a numerical procession known as the Titius–Bode law (now discredited). Except for an unexplained gap between Mars and Jupiter, Bode's formula seemed to predict the orbits of the known planets. He wrote the following explanation for the existence of a "missing planet":
This latter point seems in particular to follow from the astonishing relation which the known six planets observe in their distances from the Sun. Let the distance from the Sun to Saturn be taken as 100, then Mercury is separated by 4 such parts from the Sun. Venus is 4 + 3 = 7. The Earth 4 + 6 = 10. Mars 4 + 12 = 16. Now comes a gap in this so orderly progression. After Mars there follows a space of 4 + 24 = 28 parts, in which no planet has yet been seen. Can one believe that the Founder of the universe had left this space empty? Certainly not. From here we come to the distance of Jupiter by 4 + 48 = 52 parts, and finally to that of Saturn by 4 + 96 = 100 parts.
Bode's formula predicted another planet would be found with an orbital radius near 2.8 astronomical units (AU), or 420 million km, from the Sun. The Titius–Bode law got a boost with William Herschel's discovery of Uranus near the predicted distance for a planet beyond Saturn. In 1800, a group headed by Franz Xaver von Zach, editor of the German astronomical journal Monatliche Correspondenz (Monthly Correspondence), sent requests to 24 experienced astronomers (whom he dubbed the "celestial police"), asking that they combine their efforts and begin a methodical search for the expected planet. Although they did not discover Ceres, they later found the asteroids 2 Pallas, 3 Juno and 4 Vesta.
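The progression Bode describes can be written compactly as distance = 0.4 + 0.3·k AU, where k is 0 for Mercury and doubles for each subsequent slot; a minimal sketch reproducing the gap at 2.8 AU:

```python
# Titius–Bode law (now discredited): distance = 0.4 + 0.3 * k AU,
# where k = 0 for Mercury, then 1, 2, 4, 8, ... doubling per slot.
def titius_bode_au(slot):
    k = 0 if slot == 0 else 2 ** (slot - 1)
    return 0.4 + 0.3 * k

names = ["Mercury", "Venus", "Earth", "Mars", "(gap)", "Jupiter", "Saturn"]
for slot, name in enumerate(names):
    print(f"{name:8s} {titius_bode_au(slot):5.1f} AU")
```

The "(gap)" slot evaluates to 2.8 AU, the predicted orbital radius at which Ceres was later found; the values match Bode's quoted "parts" divided by ten (4, 7, 10, 16, 28, 52, 100).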
One of the astronomers selected for the search was Giuseppe Piazzi, a Catholic priest at the Academy of Palermo, Sicily. Before receiving his invitation to join the group, Piazzi discovered Ceres on 1 January 1801. He was searching for "the 87th [star] of the Catalogue of the Zodiacal stars of Mr la Caille", but found that "it was preceded by another". Instead of a star, Piazzi had found a moving star-like object, which he first thought was a comet:
The light was a little faint, and of the colour of Jupiter, but similar to many others which generally are reckoned of the eighth magnitude. Therefore I had no doubt of its being any other than a fixed star. [...] The evening of the third, my suspicion was converted into certainty, being assured it was not a fixed star. Nevertheless before I made it known, I waited till the evening of the fourth, when I had the satisfaction to see it had moved at the same rate as on the preceding days.
Piazzi observed Ceres a total of 24 times, the final time on 11 February 1801, when illness interrupted his work. He announced his discovery on 24 January 1801 in letters to only two fellow astronomers, his compatriot Barnaba Oriani of Milan and Bode in Berlin. He reported it as a comet but "since its movement is so slow and rather uniform, it has occurred to me several times that it might be something better than a comet". In April, Piazzi sent his complete observations to Oriani, Bode, and French astronomer Jérôme Lalande. The information was published in the September 1801 issue of the Monatliche Correspondenz.
By this time, the apparent position of Ceres had changed (mostly due to Earth's motion around the Sun), and was too close to the Sun's glare for other astronomers to confirm Piazzi's observations. Toward the end of the year, Ceres should have been visible again, but after such a long time it was difficult to predict its exact position. To recover Ceres, mathematician Carl Friedrich Gauss, then 24 years old, developed an efficient method of orbit determination. In a few weeks, he predicted the path of Ceres and sent his results to von Zach. On 31 December 1801, von Zach and fellow celestial policeman Heinrich W. M. Olbers found Ceres near the predicted position and thus recovered it. At 2.8 AU from the Sun, Ceres appeared to fit the Titius–Bode law almost perfectly; however, when Neptune was discovered in 1846, it was 8 AU closer than predicted, leading most astronomers to conclude that the law was a coincidence. Piazzi named the newly discovered object Ceres Ferdinandea, "in honor of the patron goddess of Sicily and of King Ferdinand of Bourbon".
Further search
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered by von Zach's group over the next few years, with Vesta found in 1807. No new asteroids were discovered until 1845. Amateur astronomer Karl Ludwig Hencke began his search for new asteroids in 1830, and fifteen years later, while looking for Vesta, he found the asteroid later named 5 Astraea. It was the first new asteroid discovery in 38 years. Carl Friedrich Gauss was given the honor of naming the asteroid. After this, other astronomers joined in; 15 asteroids were found by the end of 1851. In 1868, when James Craig Watson discovered the 100th asteroid, the French Academy of Sciences engraved the faces of Karl Theodor Robert Luther, John Russell Hind, and Hermann Goldschmidt, the three most successful asteroid-hunters at that time, on a commemorative medallion marking the event.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
19th and 20th centuries
In the past, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. A body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number. The last step is sending the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
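The designation scheme described above is mechanical enough to sketch in code. The following Python sketch (function names are my own) illustrates the convention: the half-month letters run A–Y with I skipped, and the sequence letters also skip I, repeating with an appended cycle count once they are exhausted.

```python
from datetime import date

# 24 half-month letters, A-Y with I skipped: A = Jan 1-15, B = Jan 16-31, ...
HALF_MONTHS = "ABCDEFGHJKLMNOPQRSTUVWXY"
# 25 sequence letters (I skipped); after Z they repeat with a cycle count.
SEQUENCE_LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"

def half_month_letter(d: date) -> str:
    """Letter for the half-month containing date d."""
    return HALF_MONTHS[(d.month - 1) * 2 + (0 if d.day <= 15 else 1)]

def provisional_designation(d: date, sequence: int) -> str:
    """Provisional designation for the `sequence`-th apparition (1-based)
    reported in the half-month containing date d."""
    cycle, pos = divmod(sequence - 1, len(SEQUENCE_LETTERS))
    suffix = SEQUENCE_LETTERS[pos] + (str(cycle) if cycle else "")
    return f"{d.year} {half_month_letter(d)}{suffix}"
```

For instance, an object first observed on 30 August 1992 as the 27th report of that half-month would be designated 1992 QB1, which was in fact the original designation of the trans-Neptunian object now named 15760 Albion.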
Naming
By 1851, the Royal Astronomical Society decided that asteroids were being discovered at such a rapid rate that a different system was needed to categorize or name asteroids. In 1852, when de Gasparis discovered the twentieth asteroid, Benjamin Valz gave it a name and a number designating its rank among asteroid discoveries, 20 Massalia. Sometimes asteroids were discovered and not seen again. So, starting in 1892, new asteroids were listed by the year and a capital letter indicating the order in which the asteroid's orbit was calculated and registered within that specific year. For example, the first two asteroids discovered in 1892 were labeled 1892A and 1892B. However, there were not enough letters in the alphabet for all of the asteroids discovered in 1893, so 1893Z was followed by 1893AA. Several variations of these schemes were tried, including a 1914 scheme that combined the year with a Greek letter. A simple chronological numbering system was established in 1925.
Currently, all newly discovered asteroids receive a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name. The formal naming convention uses parentheses around the number—e.g. (433) Eros—but dropping the parentheses is quite common. Informally, it is also common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.
Symbols
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1855 there were two dozen asteroid symbols, which often occurred in multiple variants.
In 1851, after the fifteenth asteroid, Eunomia, had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid. The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides.
Terminology
The first discovered asteroid, Ceres, was originally considered a new planet. It was followed by the discovery of other similar bodies, which with the equipment of the time appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. Well into the second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably.
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid never had a formal definition, with the broader term small Solar System bodies being preferred by the International Astronomical Union (IAU). As no IAU definition exists, asteroid can be defined as "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions of those terms".
When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until small Solar System body was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near-surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; "asteroids" with notably eccentric orbits are probably dormant or extinct comets.
For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, 2060 Chiron in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as 944 Hidalgo ventured far beyond Jupiter for part of their orbit. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids. There was debate over whether these objects should be considered asteroids or given a new classification. Then, when the first trans-Neptunian object (other than Pluto), 15760 Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids.
The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets—those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally, the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about across, has been placed in the dwarf planet category.
Formation
Many asteroids are the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. It is thought that planetesimals in the asteroid belt evolved much like the rest of objects in the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Distribution within the Solar System
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
Asteroid belt
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is estimated to contain between 1.1 and 1.9 million asteroids larger than in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number runs into the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and an infrared survey has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more. The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16.
The total mass of the asteroid belt is estimated to be kg, which is just 3% of the mass of the Moon; the mass of the Kuiper Belt and Scattered Disk is over 100 times as large. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, account for perhaps 62% of the belt's total mass, with 39% accounted for by Ceres alone.
Trojans
Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body.
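The geometry is simple: L4 and L5 sit at the same orbital radius as the planet, 60° ahead of and behind it, so the Sun, the planet, and each point form an equilateral triangle. A minimal sketch of this geometry (the function name is mine, and a circular orbit is assumed):

```python
import math

def trojan_points(a: float):
    """Coordinates (in the orbital plane, Sun at the origin, planet on the
    +x axis at radius a) of the L4 and L5 points: same orbital radius as
    the planet, 60 degrees ahead of (L4) and behind (L5) it."""
    l4 = (a * math.cos(math.radians(60)), a * math.sin(math.radians(60)))
    l5 = (a * math.cos(math.radians(-60)), a * math.sin(math.radians(-60)))
    return l4, l5
```

Because the triangle is equilateral, each point is exactly one orbital radius away from both the Sun and the planet; for Jupiter (a ≈ 5.2 AU) the trojan swarms therefore stay about 5.2 AU from the planet itself.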
In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans, 28 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn and Uranus probably do not have any primordial trojans.
Near-Earth asteroids
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. A total of 28,772 near-Earth asteroids were known; 878 have a diameter of one kilometer or larger.
A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter.
Many asteroids have natural satellites (minor-planet moons). There were 85 NEAs known to have at least one moon, including three known to have two moons. The asteroid 3122 Florence, one of the largest potentially hazardous asteroids, has two moons, which were discovered by radar imaging during the asteroid's 2017 approach to Earth.
Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q):
The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU). That is, Q < 0.983 AU, which implies that the asteroid's semi-major axis is also less than 0.983 AU.
The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit. Mathematically, a < 1.0 AU and Q > 0.983 AU. (0.983 AU is Earth's perihelion distance.)
The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit. Mathematically, a > 1.0 AU and q < 1.017 AU. (1.017 AU is Earth's aphelion distance.)
The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU). Amor asteroids are also near-Earth objects, so q < 1.3 AU. In summary, 1.017 AU < q < 1.3 AU. (This implies that the asteroid's semi-major axis (a) is also larger than 1.017 AU.) Some Amor asteroid orbits cross the orbit of Mars.
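The four definitions above can be turned into a small classifier. A hedged sketch in Python, using the boundary values quoted in the list plus the conventional near-Earth perihelion cutoff of q < 1.3 AU (an assumption beyond what the list states explicitly; the function name is mine):

```python
EARTH_PERIHELION = 0.983  # AU
EARTH_APHELION = 1.017    # AU
NEO_LIMIT = 1.3           # AU, conventional near-Earth perihelion cutoff

def classify_nea(a: float, q: float, Q: float) -> str:
    """Classify a near-Earth asteroid by semi-major axis a, perihelion
    distance q, and aphelion distance Q (all in AU)."""
    if Q < EARTH_PERIHELION:
        return "Atira"   # entirely inside Earth's orbit
    if a < 1.0:
        return "Aten"    # a < 1 AU, crosses Earth's orbit
    if q < EARTH_APHELION:
        return "Apollo"  # a > 1 AU, crosses Earth's orbit
    if q < NEO_LIMIT:
        return "Amor"    # entirely outside Earth's orbit, but near it
    return "not a near-Earth asteroid"
```

Note that the checks must run in this order: once Q ≥ 0.983 AU is established, the a < 1 AU test alone suffices for an Aten, and once q ≥ 1.017 AU is established the object can no longer be an Earth-crosser.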
Martian moons
It is unclear whether the Martian moons Phobos and Deimos are captured asteroids or formed as a result of an impact event on Mars. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane, and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear whether sufficient time was available for this to occur for Deimos. Capture also requires dissipation of energy. The current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey A. Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces.
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars.
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon.
Characteristics
Size distribution
Asteroids vary greatly in size, from almost for the largest down to rocks just 1 meter across, below which an object is classified as a meteoroid. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of . The next largest are 4 Vesta and 2 Pallas, both with diameters of just over . Vesta is the brightest of the four main-belt asteroids that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be , ≈ 3.25% of the mass of the Moon. Of this, Ceres comprises , about 40% of the total. Adding in the next three most massive objects, Vesta (11%), Pallas (8.5%), and Hygiea (3–4%), brings this figure up to a bit over 60%, whereas the next seven most-massive asteroids bring the total up to 70%. The number of asteroids increases rapidly as their individual masses decrease.
The number of asteroids decreases markedly with increasing size. Although the size distribution generally follows a power law, there are 'bumps' at about and , where more asteroids than expected from such a curve are found. Most asteroids larger than approximately 120 km in diameter are primordial (surviving from the accretion epoch), whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today.
Largest asteroids
The three largest objects in the asteroid belt, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids. The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. The four largest asteroids constitute half the mass of the asteroid belt.
Ceres is the only asteroid that appears to have a plastic shape under its own gravity and hence the only one that is a dwarf planet. It has a brighter absolute magnitude than most other asteroids, around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. No meteorites from Ceres have been found on Earth.
Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly of basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth.
Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at high angles to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids.
Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018 revealed that Hygiea has a nearly spherical shape, which is consistent with it being in hydrostatic equilibrium, with it formerly having been in hydrostatic equilibrium, or with it having been disrupted and recoalesced.
Internal differentiation of large asteroids is possibly related to their lack of natural satellites, as satellites of main belt asteroids are mostly believed to form from collisional disruption, creating a rubble pile structure.
Rotation
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period less than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.
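This balance point can be computed directly: for a strengthless sphere of density ρ, surface gravity equals the centrifugal acceleration at the equator when the rotation period drops to P = √(3π/(Gρ)), so any faster spin sheds loose material. A minimal sketch of this estimate, assuming typical asteroid densities:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density_kg_m3: float) -> float:
    """Rotation period (hours) below which a strengthless sphere sheds
    loose equatorial material: setting surface gravity (4/3)*pi*G*rho*R
    equal to the centrifugal term omega^2 * R gives P = sqrt(3*pi/(G*rho)).
    Note the radius R cancels, so the limit depends only on density."""
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600.0
```

For densities of 2.0–2.7 g/cm³ this gives roughly 2.3 down to 2.0 hours, in line with the observed ~2.2-hour spin barrier; because the radius cancels out, the same limit applies to rubble piles of any size.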
Color
Asteroids become darker and redder with age due to space weathering. However, evidence suggests that most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids.
Surface features
Except for the "big four" (Ceres, Pallas, Vesta, and Hygiea), asteroids are likely to be broadly similar in appearance, if irregular in shape. 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius. Earth-based observations of 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida, that have been observed up close, also reveal a deep regolith covering the surface. Of the big four, the surfaces of Pallas and Hygiea remain practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid.
The Dawn spacecraft revealed that Ceres has a heavily cratered surface, but with fewer large craters than expected. Models based on the formation of the current asteroid belt had suggested Ceres should possess 10 to 15 craters larger than in diameter. The largest confirmed crater on Ceres, Kerwan Basin, is across. The most likely reason for this is viscous relaxation of the crust slowly flattening out larger impacts.
Composition
Asteroids are classified by their characteristic reflectance spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These were named after and are generally identified with carbonaceous (carbon-rich), metallic, and silicaceous (stony) compositions, respectively. The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle, whereas Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. Thought to be the largest undifferentiated asteroid, 10 Hygiea seems to have a uniformly primitive composition of carbonaceous chondrite, but it may actually be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of proto-planets, high in rock and metal. Most small asteroids are believed to be piles of rubble held together loosely by gravity, although the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or possibly a planet.
In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and densities under , and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7 g/cm³. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the score of asteroids with masses greater than .
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km) and 87 Sylvia (384×262×232 km). Only a few asteroids are larger than 87 Sylvia, and none of them have moons. The fact that such large asteroids as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
Water
Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. In 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimating, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. The presence of ice on 24 Themis makes the initial theory plausible.
In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."
Findings have shown that the solar wind can react with the oxygen in the upper layer of asteroids to create water. It has been estimated that "every cubic metre of irradiated rock could contain up to 20 litres"; the study was conducted using atom probe tomography, and the figures refer to the S-type asteroid Itokawa.
Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have an ultraporous lithology (UPL): a porous texture that could have formed through the removal of ice that once filled the pores. This suggests that UPLs "represent fossils of primordial ice".
Organic compounds
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (an event called "panspermia"). In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some ingredients essential to life, supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly also the notion of panspermia.
Classification
Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Orbital classification
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet .
Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or another planet. Examples are 3753 Cruithne and . The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus. Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with the outer planets as well.
Spectral classification
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two most widely used taxonomies are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists of mostly metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Problems
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials.
Active asteroids
Active asteroids are objects that have asteroid-like orbits but show comet-like visual characteristics. That is, they show comae, tails, or other visual evidence of mass-loss (like a comet), but their orbit remains within Jupiter's orbit (like an asteroid). These bodies were originally designated main-belt comets (MBCs) in 2006 by astronomers David Jewitt and Henry Hsieh, but this name implies they are necessarily icy in composition like a comet and that they only exist within the main-belt, whereas the growing population of active asteroids shows that this is not always the case.
The first active asteroid discovered was 7968 Elst–Pizarro. It was discovered (as an asteroid) in 1979, but was then found to have a tail by Eric Elst and Guido Pizarro in 1996 and given the cometary designation 133P/Elst-Pizarro. Another notable object is 311P/PanSTARRS: observations made by the Hubble Space Telescope revealed that it had six comet-like tails. The tails are suspected to be streams of material ejected by a rubble-pile asteroid spinning fast enough to shed material from its surface.
By smashing into the asteroid Dimorphos, NASA's Double Asteroid Redirection Test spacecraft made it an active asteroid. Scientists had proposed that some active asteroids are the result of impact events, but no one had ever observed the activation of an asteroid. The DART mission activated Dimorphos under precisely known and carefully observed impact conditions, enabling the detailed study of the formation of an active asteroid for the first time. Observations show that Dimorphos lost approximately 1 million kilograms after the collision. The impact produced a dust plume that temporarily brightened the Didymos system and developed a -long dust tail that persisted for several months.
Observation and exploration
Until the age of space travel, objects in the asteroid belt could only be observed with large telescopes, their shapes and terrain remaining a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can only resolve a small amount of detail on the surfaces of the largest asteroids. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (variation in brightness during rotation) and their spectral properties. Sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. Spacecraft flybys can provide much more data than any ground or space-based observations; sample-return missions give insights into regolith composition.
Ground-based observations
As asteroids are rather small and faint objects, the data that can be obtained from ground-based observations (GBO) are limited. Ground-based optical telescopes can measure the visual magnitude; when converted into an absolute magnitude, it gives a rough estimate of the asteroid's size. Light-curve measurements can also be made; when collected over a long period of time, they allow an estimate of the rotational period, the pole orientation (sometimes), and a rough estimate of the asteroid's shape. Spectral data (both visible-light and near-infrared spectroscopy) give information about the object's composition and are used to classify the observed asteroids. Such observations are limited in that they provide information about only a thin surface layer (up to several micrometers). As planetologist Patrick Michel writes:
Mid- to thermal-infrared observations, along with polarimetry measurements, are probably the only data that give some indication of actual physical properties. Measuring the heat flux of an asteroid at a single wavelength gives an estimate of the dimensions of the object; these measurements have lower uncertainty than measurements of the reflected sunlight in the visible-light spectral region. If the two measurements can be combined, both the effective diameter and the geometric albedo—the latter being a measure of the brightness at zero phase angle, that is, when illumination comes from directly behind the observer—can be derived. In addition, thermal measurements at two or more wavelengths, plus the brightness in the visible-light region, give information on the thermal properties. The thermal inertia, which is a measure of how fast a material heats up or cools off, of most observed asteroids is lower than the bare-rock reference value but greater than that of the lunar regolith; this observation indicates the presence of an insulating layer of granular material on their surface. Moreover, there seems to be a trend, perhaps related to the gravitational environment, that smaller objects (with lower gravity) have a small regolith layer consisting of coarse grains, while larger objects have a thicker regolith layer consisting of fine grains. However, the detailed properties of this regolith layer are poorly known from remote observations. Moreover, the relation between thermal inertia and surface roughness is not straightforward, so one needs to interpret the thermal inertia with caution.
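The size estimate from absolute magnitude and geometric albedo described above can be sketched numerically. The relation below is the standard minor-planet convention (diameter in kilometers, absolute magnitude H, geometric albedo p_V); the example albedo values are illustrative, not measurements of any particular asteroid:

```python
import math

def diameter_km(abs_magnitude_h: float, geometric_albedo: float) -> float:
    """Estimate an asteroid's effective diameter (km) from its absolute
    magnitude H and geometric albedo p_V, using the standard minor-planet
    relation D = 1329 / sqrt(p_V) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude_h / 5)

# The same H implies very different sizes for dark versus bright surfaces,
# which is why an albedo from thermal measurements is needed to pin down
# the diameter rather than relying on reflected light alone.
h = 15.0
dark = diameter_km(h, 0.05)    # dark, C-type-like albedo -> ~5.9 km
bright = diameter_km(h, 0.25)  # bright, S-type-like albedo -> ~2.7 km
```

This illustrates why combining thermal-infrared flux with visible-light brightness, as described in the quote, is so valuable: it breaks the degeneracy between a small bright object and a large dark one.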
Near-Earth asteroids that come into close vicinity of the planet can be studied in more detail with radar, which provides information about the asteroid's surface (for example, it can reveal the presence of craters and boulders). Such observations were conducted by the Arecibo Observatory in Puerto Rico (305-meter dish) and the Goldstone Observatory in California (70-meter dish). Radar observations can also be used for accurate determination of the orbital and rotational dynamics of observed objects.
Space-based observations
Both space-based and ground-based observatories have conducted asteroid search programs; the space-based searches are expected to detect more objects because there is no atmosphere to interfere and because they can observe larger portions of the sky. NEOWISE has observed more than 100,000 main-belt asteroids, and the Spitzer Space Telescope has observed more than 700 near-Earth asteroids. These observations determined rough sizes of the majority of observed objects, but provided limited detail about surface properties (such as regolith depth and composition, angle of repose, cohesion, and porosity).
Asteroids have also been studied by the Hubble Space Telescope, which has tracked colliding asteroids in the main belt, observed the break-up of an asteroid, observed an active asteroid with six comet-like tails, and observed asteroids chosen as targets of dedicated missions.
Space probe missions
According to Patrick Michel:
The internal structure of asteroids is inferred only from indirect evidence: bulk densities measured by spacecraft, the orbits of natural satellites in the case of asteroid binaries, and the drift of an asteroid's orbit due to the Yarkovsky thermal effect. A spacecraft near an asteroid is perturbed enough by the asteroid's gravity to allow an estimate of the asteroid's mass. The volume is then estimated using a model of the asteroid's shape. Mass and volume allow the derivation of the bulk density, whose uncertainty is usually dominated by the errors made on the volume estimate. The internal porosity of asteroids can be inferred by comparing their bulk density with that of their assumed meteorite analogues; dark asteroids seem to be more porous (>40%) than bright ones. The nature of this porosity is unclear.
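The porosity inference described above reduces to a one-line comparison of densities. The values below are illustrative, not measurements of any particular asteroid:

```python
def macroporosity(bulk_density: float, grain_density: float) -> float:
    """Fraction of empty space inferred by comparing an asteroid's bulk
    density (from spacecraft mass and shape-model volume) with the grain
    density of its assumed meteorite analogue."""
    return 1.0 - bulk_density / grain_density

# Illustrative values: a dark asteroid with a bulk density of ~1.3 g/cm^3,
# compared against a ~2.4 g/cm^3 carbonaceous-chondrite analogue, implies
# roughly 46% empty space, consistent with the statement that dark
# asteroids appear more porous (>40%) than bright ones.
p = macroporosity(1.3, 2.4)   # ≈ 0.46
```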
Dedicated missions
The first asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter. Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999), 5535 Annefrank (by Stardust in 2002), 2867 Šteins and 21 Lutetia (by the Rosetta probe in 2008), and 4179 Toutatis (China's lunar orbiter Chang'e 2, which flew within in 2012).
The first dedicated asteroid probe was NASA's NEAR Shoemaker, which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros, finally landing on its surface in 2001. It was the first spacecraft to successfully orbit and land on an asteroid. From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and returned samples of its surface to Earth on 13 June 2010, the first asteroid sample-return mission. In 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta for a year, and observed the dwarf planet Ceres for three years.
Hayabusa2, a probe launched by JAXA in 2014, orbited its target asteroid 162173 Ryugu for more than a year and took samples that were delivered to Earth in 2020. The spacecraft is now on an extended mission and is expected to arrive at a new target in 2031.
In 2016, NASA launched OSIRIS-REx, a sample-return mission to the asteroid 101955 Bennu. In 2021, the probe departed the asteroid with a sample from its surface, which was delivered to Earth in September 2023. The spacecraft continues on an extended mission, designated OSIRIS-APEX, to explore the near-Earth asteroid Apophis in 2029.
In 2021, NASA launched the Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potentially hazardous objects. DART deliberately crashed into the minor-planet moon Dimorphos of the double asteroid Didymos in September 2022 to assess the potential of a spacecraft impact to deflect an asteroid from a collision course with Earth. In October, NASA declared DART a success, confirming it had shortened Dimorphos' orbital period around Didymos by about 32 minutes.
Planned missions
Currently, several asteroid-dedicated missions are planned by NASA, JAXA, ESA, and CNSA.
NASA's Lucy, launched in 2021, will visit eight asteroids: one from the main belt and seven Jupiter trojans. It is the first mission to the trojans, and its main mission will begin in 2027.
NASA's Psyche, launched in October 2023, will study the large metallic asteroid of the same name, and will arrive there in 2029.
ESA's Hera, planned for launch in 2024, will study the results of the DART impact. It will measure the size and morphology of the crater, and momentum transmitted by the impact, to determine the efficiency of the deflection produced by DART.
JAXA's DESTINY+ is a mission for a flyby of the Geminids meteor shower parent body 3200 Phaethon, as well as various minor bodies. Its launch is planned for 2024.
CNSA's Tianwen-2 is planned to launch in 2025. It will use solar electric propulsion to explore the co-orbital near-Earth asteroid 469219 Kamoʻoalewa and the active asteroid 311P/PanSTARRS. The spacecraft will collect samples of the regolith of Kamo'oalewa.
Asteroid mining
The concept of asteroid mining was proposed in the 1970s. Matt Anderson defines successful asteroid mining as "the development of a mining program that is both financially self-sustaining and profitable to its investors". It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth, or materials for constructing space habitats. Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
As the prospect of resource depletion on Earth becomes more concrete, the idea of extracting valuable elements from asteroids and returning them to Earth for profit, or using space-based resources to build solar-power satellites and space habitats, becomes more attractive. Hypothetically, water processed from ice could refuel orbiting propellant depots.
From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable.
Mining Ceres is also considered a possibility. As the largest body in the asteroid belt, Ceres could become the main base and transport hub for future asteroid mining infrastructure, allowing mineral resources to be transported to Mars, the Moon, and Earth. Because of its small escape velocity combined with large amounts of water ice, it also could serve as a source of water, fuel, and oxygen for ships going through and beyond the asteroid belt. Transportation from Mars or the Moon to Ceres would be even more energy-efficient than transportation from Earth to the Moon.
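The claim about Ceres's small escape velocity can be checked with the standard formula v_esc = sqrt(2GM/R); the mass and radius figures below are approximate published values:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_CERES = 9.39e20       # approximate mass of Ceres, kg
R_CERES = 4.70e5        # approximate mean radius of Ceres, m

# v_esc = sqrt(2GM/R): roughly 0.5 km/s for Ceres, versus ~11.2 km/s for
# Earth, which is why lifting water, fuel, and oxygen off Ceres would be
# far cheaper in energy terms than launching them from Earth.
v_esc = math.sqrt(2 * G * M_CERES / R_CERES)   # ≈ 516 m/s
```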
Threats to Earth
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth. The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar discoveries. In order of discovery, these were 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across.
All of these considerations helped spur the launch of highly efficient surveys, consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. , it was estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter had been discovered. A list of teams using such systems includes:
Lincoln Near-Earth Asteroid Research (LINEAR)
Near-Earth Asteroid Tracking (NEAT)
Spacewatch
Lowell Observatory Near-Earth-Object Search (LONEOS)
Catalina Sky Survey (CSS)
Pan-STARRS
NEOWISE
Asteroid Terrestrial-impact Last Alert System (ATLAS)
Campo Imperatore Near-Earth Object Survey (CINEOS)
Japanese Spaceguard Association
Asiago-DLR Asteroid Survey (ADAS)
, the LINEAR system alone had discovered 147,132 asteroids. Among the surveys, 19,266 near-Earth asteroids have been discovered including almost 900 more than in diameter.
In April 2018, the B612 Foundation reported "It is 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent sure when." In June 2018, the National Science and Technology Council warned that the United States is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
The United Nations declared 30 June to be International Asteroid Day to educate the public about asteroids. The date of International Asteroid Day commemorates the anniversary of the Tunguska asteroid impact over Siberia, on 30 June 1908.
Chicxulub impact
The Chicxulub crater is an impact crater buried underneath the Yucatán Peninsula in Mexico. Its center is offshore near the communities of Chicxulub Puerto and Chicxulub Pueblo, after which the crater is named. It was formed when a large asteroid, about in diameter, struck the Earth. The crater is estimated to be in diameter and in depth. It is one of the largest confirmed impact structures on Earth, and the only one whose peak ring is intact and directly accessible for scientific research.
In the late 1970s, geologist Walter Alvarez and his father, Nobel Prize–winning scientist Luis Walter Alvarez, put forth their theory that the Cretaceous–Paleogene extinction was caused by an impact event. The main evidence of such an impact was contained in a thin layer of clay present in the K–Pg boundary in Gubbio, Italy. The Alvarezes and colleagues reported that it contained an abnormally high concentration of iridium, a chemical element rare on Earth but common in asteroids. Iridium levels in this layer were as much as 160 times above the background level. It was hypothesized that the iridium was spread into the atmosphere when the impactor was vaporized and settled across the Earth's surface among other material thrown up by the impact, producing the layer of iridium-enriched clay. At the time, consensus was not settled on what caused the Cretaceous–Paleogene extinction and the boundary layer, with theories including a nearby supernova, climate change, or a geomagnetic reversal. The Alvarezes' impact hypothesis was rejected by many paleontologists, who believed that the lack of fossils found close to the K–Pg boundary—the "three-meter problem"—suggested a more gradual die-off of fossil species.
There is broad consensus that the Chicxulub impactor was an asteroid with a carbonaceous chondrite composition, rather than a comet. The impactor was around in diameter—large enough that, if set at sea level, it would have stood taller than Mount Everest.
Asteroid deflection strategies
Various collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, operations, and technology readiness. There are various methods for changing the course of an asteroid/comet. These can be differentiated by various types of attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (interception, rendezvous, or remote station).
Strategies fall into two basic sets: fragmentation and delay. Fragmentation concentrates on rendering the impactor harmless by fragmenting it and scattering the fragments so that they miss the Earth or are small enough to burn up in the atmosphere. Delay exploits the fact that both the Earth and the impactor are in orbit. An impact occurs when both reach the same point in space at the same time, or more correctly when some point on Earth's surface intersects the impactor's orbit when the impactor arrives. Since the Earth is approximately 12,750 km in diameter and moves at approximately 30 km per second in its orbit, it travels a distance of one planetary diameter in about 425 seconds, or slightly over seven minutes. Delaying or advancing the impactor's arrival by a time of this magnitude can, depending on the exact geometry of the impact, cause it to miss the Earth.
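The delay arithmetic above can be sketched directly. The ten-year lead time below is an illustrative assumption, and the delta-v estimate is a simplified along-track approximation that ignores detailed orbital mechanics:

```python
EARTH_DIAMETER_KM = 12_750
ORBITAL_SPEED_KM_S = 30       # Earth's mean orbital speed, ~30 km/s

# Time for Earth to sweep one planetary diameter along its orbit:
# shifting an impactor's arrival by roughly this much converts a hit
# into a miss (geometry permitting).
crossing_time_s = EARTH_DIAMETER_KM / ORBITAL_SPEED_KM_S   # 425 s
crossing_time_min = crossing_time_s / 60                   # ≈ 7.1 min

# A tiny velocity change applied years in advance accumulates into a
# large along-track shift: delta-v (m/s) * lead time (s) ~ shift (m).
lead_years = 10                   # assumed warning time
seconds_per_year = 3.156e7
dv_needed = EARTH_DIAMETER_KM * 1e3 / (lead_years * seconds_per_year)
# ≈ 0.04 m/s, illustrating why early detection makes deflection feasible
```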
"Project Icarus", designed in 1967, was one of the first contingency plans for an asteroid collision, in this case with 1566 Icarus. The plan relied on the new Saturn V rocket, which did not make its first flight until after the report had been completed. Six Saturn V rockets would be used, each launched at variable intervals from months to hours away from impact. Each rocket was to be fitted with a single 100-megaton nuclear warhead as well as a modified Apollo Service Module and uncrewed Apollo Command Module for guidance to the target. The warheads would be detonated 30 meters from the surface, deflecting or partially destroying the asteroid. Depending on the subsequent impacts on the course or the destruction of the asteroid, later missions would be modified or cancelled as needed. The "last-ditch" launch of the sixth rocket would be 18 hours prior to impact.
Fiction
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
See also
List of asteroid close approaches to Earth
List of exceptional asteroids
Lost minor planet
Meanings of minor-planet names
External links
NASA Asteroid and Comet Watch site
https://en.wikipedia.org/wiki/Argon
Argon is a chemical element with the symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third-most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust.
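The abundance comparisons above follow directly from the quoted volume fractions:

```python
# Atmospheric abundances by volume (ppmv), as quoted in the text;
# water vapor is a global average and varies greatly.
ppmv = {"argon": 9340, "water_vapor": 4000, "co2": 400, "neon": 18}

vs_water = ppmv["argon"] / ppmv["water_vapor"]   # ≈ 2.3 (more than twice)
vs_co2 = ppmv["argon"] / ppmv["co2"]             # ≈ 23
vs_neon = ppmv["argon"] / ppmv["neon"]           # ≈ 519 (more than 500x)
```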
Nearly all of the argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas.
The name "argon" is derived from the Greek word , neuter singular form of meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is extracted industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon is also used in incandescent and fluorescent lighting, and in other gas-discharge tubes. Argon makes a distinctive blue-green gas laser. Argon is also used in fluorescent glow starters.
Characteristics
Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature.
Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below , has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as , and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized.
History
Argon (Greek , neuter singular form of meaning "lazy" or "inactive") is named in reference to its chemical inactivity. This inactivity impressed the namers, as argon was the first noble gas to be discovered. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785.
Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon.
Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through the independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements.
Until 1957, the symbol for argon was "A", but now it is "Ar".
Occurrence
Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively.
Isotopes
The main isotopes of argon found on Earth are (99.6%), (0.34%), and (0.06%). Naturally occurring , with a half-life of 1.25 billion years, decays to stable (11.2%) by electron capture or positron emission, and also to stable (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating.
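The K–Ar dating mentioned above can be sketched as a short calculation. This is a minimal sketch assuming a closed system that retained all radiogenic argon since the rock solidified; the branching fraction uses the 11.2% figure quoted above:

```python
import math

HALF_LIFE_K40 = 1.25e9                     # years
LAMBDA = math.log(2) / HALF_LIFE_K40       # total decay constant, 1/yr
BRANCH_TO_AR40 = 0.112                     # fraction of 40K decays -> 40Ar

def k_ar_age(ar40_over_k40: float) -> float:
    """Age in years from the measured radiogenic 40Ar/40K ratio:
    t = (1/lambda) * ln(1 + (40Ar/40K) / branching_fraction).
    Assumes no argon loss or excess argon since formation."""
    return math.log(1 + ar40_over_k40 / BRANCH_TO_AR40) / LAMBDA

# A rock holding equal amounts of radiogenic 40Ar and remaining 40K:
age = k_ar_age(1.0)   # ≈ 4.1 billion years
```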
In Earth's atmosphere, is made by cosmic ray activity, primarily by neutron capture of followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by , followed by proton emission. is created from the neutron capture by followed by an alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days.
Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of in rocks, will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide . Correspondingly, solar argon contains 84.6% (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes.
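The isotopic bookkeeping above can be verified directly from the quoted numbers:

```python
# Outer-planet atmospheric ratio quoted above: 36Ar : 38Ar : 40Ar.
ratio = {"36Ar": 8400, "38Ar": 1600, "40Ar": 1}
total = sum(ratio.values())
frac_36 = ratio["36Ar"] / total        # ≈ 0.84: primordial 36Ar dominates

# Primordial 36Ar in Earth's atmosphere: total argon (9340 ppmv)
# times its terrestrial 36Ar isotopic fraction (0.337%).
ppmv_36ar_earth = 9340 * 0.337 / 100   # ≈ 31.5 ppmv
```

The contrast between ~84% 36Ar in the outer planets and only ~0.34% on Earth is the quantitative core of the paragraph: terrestrial argon is overwhelmingly radiogenic 40Ar rather than primordial.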
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as , and its content may be as high as 1.93% (Mars).
The predominance of radiogenic is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table).
Compounds
Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975. However, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki, by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery led to the recognition that argon can form weakly bound compounds, even though HArF was not the first. It is stable up to 17 kelvins (−256 °C). The metastable dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in the interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space.
Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa.
Production
Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year.
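The separation order in the distillation column follows directly from the boiling points quoted above:

```python
# Normal boiling points (K) of the air components separated in a
# cryogenic air separation unit.
boiling_points_k = {"nitrogen": 77.3, "argon": 87.3, "oxygen": 90.2}

# Components boil off in order of ascending boiling point; note the
# narrow 2.9 K gap between argon and oxygen, which makes that cut the
# demanding step of argon purification.
order = sorted(boiling_points_k, key=boiling_points_k.get)
# ['nitrogen', 'argon', 'oxygen']
ar_o2_gap = boiling_points_k["oxygen"] - boiling_points_k["argon"]  # 2.9 K
```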
Applications
Argon has several desirable properties:
Argon is a chemically inert gas.
Argon is the cheapest alternative when nitrogen is not sufficiently inert.
Argon has low thermal conductivity.
Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications.
Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. Argon is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of argon applications arise simply because it is inert and relatively cheap.
Industrial processes
Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning.
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium.
Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life.
Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam.
Scientific research
Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between hypothetical WIMPs and an argon nucleus would produce scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, underground argon, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions.
At Linköping University, Sweden, the inert gas is used in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and the use of expensive, dangerous, and rare materials.
Preservative
Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon.
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry.
Argon is sometimes used as the propellant in aerosol cans.
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage.
Since 2002, the United States National Archives has stored important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced.
Laboratory equipment
Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus.
Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication.
Medical use
Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient.
Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects.
Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood.
Lighting
Incandescent lamps are filled with argon to protect the filament from oxidation at high temperatures. Argon is also used for the specific way it ionizes and emits light, such as in plasma globes and in calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; filled with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers.
Miscellaneous uses
Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity.
Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure.
Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and groundwater dating. Also, potassium–argon dating and the related argon–argon dating are used to date sedimentary, metamorphic, and igneous rocks.
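Dating with argon-39 rests on the standard exponential-decay law. The sketch below uses only the 269-year half-life quoted above; the function name and the 25% example fraction are illustrative assumptions, not details from the source.

```python
import math

AR39_HALF_LIFE_YEARS = 269.0  # half-life of argon-39, as given in the text

def age_from_ar39(remaining_fraction):
    """Years elapsed for a sample whose 39Ar activity has fallen to
    `remaining_fraction` of the modern atmospheric level:
    t = t_half * log2(1 / f)."""
    return AR39_HALF_LIFE_YEARS * math.log2(1.0 / remaining_fraction)

# A sample retaining 25% of the modern 39Ar activity is two half-lives old.
print(round(age_from_ar39(0.25)))  # 538
```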
Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse.
Safety
Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
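The "38% more dense" figure follows from the ideal-gas observation that, at equal temperature and pressure, gas density is proportional to molar mass. The molar masses below are standard approximate values assumed for illustration, not taken from this text.

```python
# Approximate molar masses in g/mol (assumed standard values).
M_ARGON = 39.95
M_AIR = 28.96  # mean molar mass of dry air

# At equal T and P, density scales with molar mass (ideal gas law),
# so the excess density of argon relative to air is:
excess_pct = (M_ARGON / M_AIR - 1.0) * 100.0
print(round(excess_pct))  # 38
```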
See also
Industrial gas
Oxygen–argon ratio, the ratio of two physically similar gases, which is important in various sectors.
References
Further reading
On triple point pressure at 69 kPa.
On triple point pressure at 83.8058 K.
External links
Argon at The Periodic Table of Videos (University of Nottingham)
USGS Periodic Table – Argon
Diving applications: Why Argon?
Chemical elements
E-number additives
Noble gases
Industrial gases
https://en.wikipedia.org/wiki/Arsenic
Arsenic is a chemical element with the symbol As and atomic number 33. Arsenic occurs in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. Arsenic is a metalloid. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry.
The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is a common n-type dopant in semiconductor electronic devices. It is also a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds.
A few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic are an essential dietary element in rats, hamsters, goats, chickens, and presumably other species. A role in human metabolism is not known. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world.
The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic as number 1 in its 2001 Priority List of Hazardous Substances at Superfund sites. Arsenic is classified as a Group-A carcinogen.
Characteristics
Physical characteristics
The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R-3m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form.
Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus.
Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor. As arsenic's triple point is at 3.628 MPa (35.81 atm), it does not have a melting point at standard pressure but instead sublimes from solid to vapor at 887 K (615 °C or 1137 °F).
Isotopes
Arsenic occurs in nature as one stable isotope, 75As, a monoisotopic element. As of 2003, at least 33 radioisotopes have also been synthesized, ranging in atomic mass from 60 to 92. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=26.26 hours), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions.
At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds.
Chemistry
Arsenic has electronegativity and ionization energies similar to those of its lighter congener phosphorus and accordingly readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic (and some arsenic compounds) sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 887 K (615 °C). The triple point is 3.63 MPa and 1,090 K (817 °C). Arsenic forms arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion, as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the group oxidation state of +5 than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers.
Compounds
Compounds of arsenic resemble in some respects those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds; +3 in the arsenites; and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As44− ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons.
Inorganic compounds
One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, the presence of light, and certain catalysts (namely aluminium), increase the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen.
Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5, which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts are called arsenates; arsenate is the most common form of arsenic contamination of groundwater, a problem that affects many people. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons.
The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3.
A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. In As4S4, arsenic has a formal oxidation state of +2; the compound features As–As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes.
All trihalides of arsenic(III) are well known except the astatide. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.)
Alloys
Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide.
Organoarsenic compounds
A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive odor; it is very poisonous.
Occurrence and production
Arsenic comprises about 1.5 ppm (0.00015%) of the Earth's crust, and is the 53rd most abundant element. Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater.
Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment.
In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust.
On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture.
History
The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zarnikh, meaning "yellow" (literally "gold-colored", from zar, "gold") and hence "(yellow) orpiment". It was adopted into Greek (by folk etymology) as arsenikon, a neuter form of the Greek adjective arsenikos, meaning "male" or "virile". Latin speakers adopted the Greek term as arsenicum, which in French ultimately became arsenic, whence the English word "arsenic".
Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos () describes roasting sandarach (realgar) to obtain cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era.
During the Bronze Age, arsenic was often included in the manufacture of bronze, making the alloy harder (so-called "arsenical bronze").
Jabir ibn Hayyan described the isolation of arsenic before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely.
Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide.
In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late 18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon I implicates arsenic poisoning involving wallpaper.
Two arsenic pigments have been widely used since their discovery – Paris Green in 1814 and Scheele's Green in 1775. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion. It was later replaced with Paris Green, another arsenic-based compound. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942.
Applications
Agricultural
The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations).
Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming.
The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (AsO33−) is more soluble than arsenate (AsO43−) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. The addition of sulfur, phosphorus, and iron oxides to high-arsenite soils has been found to greatly reduce arsenic phytotoxicity.
Arsenic was used as a feed additive in poultry and swine production, in particular in the U.S. until 2015, to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produced roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys.
A 2006 study of the remains of the Australian racehorse, Phar Lap, determined that the 1932 death of the famous champion was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system."
Medical use
During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler). Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, since although these drugs have the disadvantage of severe toxicity, the disease is almost uniformly fatal if untreated.
Arsenic trioxide has been used in a variety of ways since the 15th century, most commonly in the treatment of cancer, but also in medications as diverse as Fowler's solution in psoriasis. The US Food and Drug Administration in the year 2000 approved this compound for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid.
A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations.
In subtoxic doses, soluble arsenic compounds act as stimulants, and were once popular in small doses as medicine in the mid-18th to 19th centuries; their use as a stimulant was especially prevalent in sport animals such as racehorses and in work dogs.
Alloys
The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light.
Military
After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice.
Other uses
Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets.
Arsenic is used in bronzing and pyrotechnics.
As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets.
Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments.
Arsenic is also used for taxonomic sample preservation. It was also used in embalming fluids historically.
Arsenic was used as an opacifier in ceramics, creating white glazes.
Until recently, arsenic was used in optical glass. Modern glass manufacturers, under pressure from environmentalists, have ceased using both arsenic and lead.
In computers, arsenic is used in chips as an n-type dopant.
Biological role
Bacteria
Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr).
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues.
In 2011, it was postulated that a strain of Halomonadaceae could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are structurally similar. The study was widely criticised and subsequently refuted by independent research groups.
Essential trace element in higher animals
Arsenic is understood to be an essential trace mineral in birds as it is involved in the synthesis of methionine metabolites, with feeding recommendations being between 0.012 and 0.050 mg/kg.
Some evidence indicates that arsenic is an essential trace mineral in mammals. However, the biological function is not known.
Heredity
Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility.
The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation.
Biomethylation
Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also, in larger concentrations, in mushrooms. The average person's intake is about 10–50 µg/day. Intakes of about 1000 µg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic.
Environmental issues
Exposure
Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts.
During the Victorian era, arsenic was widely used in home decor, especially wallpapers.
Occurrence in drinking water
Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin and is released from the sediment into the groundwater by the anoxic conditions of the subsurface. This groundwater came into use after local and western NGOs and the Bangladeshi government undertook a massive shallow tube-well drinking-water program in the late twentieth century. This program was designed to prevent the drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem, because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a 2017 report in Science; Podgorski's team investigated more than 1,200 samples, and more than 66% exceeded the WHO contamination limit.
Since the 1980s, residents of the Ba Men region of Inner Mongolia, China, have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well-water arsenic concentrations between 5 and 10 µg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water.
A study by IIT Kharagpur found high levels of arsenic in the groundwater of 20% of India's land area, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have the highest land area exposed to arsenic.
In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits.
Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus.
Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk of containing inorganic arsenic, and preliminary well-water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic.
Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke.
Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation.
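The population-level arithmetic behind this estimate can be sketched in a few lines. The per-person excess risk here is simply back-calculated from the figures quoted above (2,000 cases among 80 million people); it is an illustrative value, not a measured dose-response coefficient.

```python
# Population-level estimate from the figures quoted above.
exposed_population = 80_000_000  # people consuming 10-50 ppb arsenic worldwide
excess_cases = 2_000             # predicted additional bladder cancers at 10 ppb

# Implied excess lifetime risk per person if everyone consumed exactly 10 ppb.
# This is a back-calculation for illustration, not an epidemiological result.
risk_per_person = excess_cases / exposed_population
print(f"Implied excess risk per person: {risk_per_person:.1e}")  # 2.5e-05
```

As the text notes, this excludes lung and skin cancer and assumes everyone is at the bottom of the 10–50 ppb exposure range, so it understates the overall impact.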
Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers has set up six arsenic treatment plants in West Bengal based on an in-situ remediation method (SAR Technology). This technology does not use any chemicals; arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic-oxidizing micro-organisms. The process does not produce any waste stream or sludge and is relatively cheap.
Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water.
Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced.
Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes.
Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 µg/L. This may find applications in areas where the potable water is extracted from underground aquifers.
San Pedro de Atacama
For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity.
Hazard maps for contaminated groundwater
Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground.
Redox transformation of arsenic in natural waters
Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution.
Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic.
The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration, and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH below 2, 2–7, 7–11, and above 11, respectively. Under reducing conditions, arsenious acid (H3AsO3) is predominant at pH 2–9.
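The pH breakpoints for the oxic (arsenate) species can be expressed as a simple lookup. This is an illustrative sketch: the function name and the exact handling of the boundary values are assumptions, and real speciation depends on ionic strength and temperature as well.

```python
def dominant_arsenate_species(pH: float) -> str:
    """Dominant dissolved As(V) species under oxidizing conditions,
    using the pH breakpoints quoted in the text (illustrative only)."""
    if pH < 2:
        return "H3AsO4"
    elif pH < 7:
        return "H2AsO4-"
    elif pH < 11:
        return "HAsO4(2-)"
    else:
        return "AsO4(3-)"

# Natural waters (pH 6.5-8.5) therefore straddle the boundary between
# the mono- and di-protonated anions.
print(dominant_arsenate_species(6.5))  # H2AsO4-
print(dominant_arsenate_species(8.5))  # HAsO4(2-)
```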
Oxidation and reduction affect the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments, and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter, which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments.
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic.
Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface which leads to the desorption of bound arsenic.
Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 and produce organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria.
Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where SO42− reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks, but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria, with rate constants ranging from 0.02 to 0.3 day−1.
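For a first-order reaction, the half-life follows directly from the rate constant as t½ = ln 2 / k. A quick calculation with the Mono Lake rate constants quoted above shows why the bacterially catalyzed reduction (half-lives of days) is called rapid compared with the months-to-a-year abiotic oxidation half-lives; the function name is an assumption for this sketch.

```python
import math

def half_life_days(k_per_day: float) -> float:
    """Half-life of a first-order process: t(1/2) = ln(2) / k."""
    return math.log(2) / k_per_day

# Bacterial As(V) -> As(III) reduction in Mono Lake, k = 0.02 to 0.3 per day
for k in (0.02, 0.3):
    print(f"k = {k}/day -> half-life ≈ {half_life_days(k):.1f} days")
# k = 0.02/day -> half-life ≈ 34.7 days
# k = 0.3/day  -> half-life ≈ 2.3 days
```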
Wood preservation in the US
As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole.
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater.
Mapping of industrial releases in the US
One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources.
Bioremediation
Physical, chemical, and biological methods have been used to remediate arsenic contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the toxic form of arsenic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation but the disposal of contaminated plant material needs to be considered.
Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination.
Arsenic removal
Coagulation and flocculation
Coagulation and flocculation are closely related processes commonly used to remove arsenate from water. Because arsenate ions carry a net negative charge, they settle slowly or not at all, due to charge repulsion. In coagulation, a positively charged coagulant such as an iron or aluminium salt (commonly FeCl3, Fe2(SO4)3, or Al2(SO4)3) neutralizes the negatively charged arsenate, enabling it to settle. Flocculation follows, in which a flocculant bridges the smaller particles and allows the aggregates to precipitate out of the water. However, such methods may not be efficient for arsenite, because As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH.
The major drawbacks of coagulation and flocculation are the costly disposal of the arsenate-concentrated sludge and possible secondary contamination of the environment. Moreover, coagulants such as iron salts may themselves produce ion contamination that exceeds safe levels.
Toxicity and precautions
Arsenic and many of its compounds are especially potent poisons. Small amounts of arsenic can be detected by pharmacopoeial methods, which include reduction of arsenic compounds to arsine with the help of zinc; the result can be confirmed with mercuric chloride paper.
Classification
Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC.
The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens.
Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [; As(V)] and arsenite [; As(III)]".
Legal limits, food, and drink
In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3.
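For dilute aqueous solutions, 1 ppb by mass is approximately 1 µg/L, since a litre of water weighs about 1 kg. The helpers below convert the drinking-water limits quoted above between the two common unit styles; the function names are assumptions for this sketch, and the approximation does not apply to air concentrations (mg/m3), which use a different basis.

```python
def ppb_to_ug_per_L(ppb: float) -> float:
    """Mass-based ppb in dilute water is ~1 ug/L (1 L of water ~ 1 kg)."""
    return ppb * 1.0

def ppb_to_mg_per_L(ppb: float) -> float:
    return ppb / 1000.0

print(ppb_to_mg_per_L(10))  # EPA/FDA drinking-water limit: 10 ppb = 0.01 mg/L
print(ppb_to_mg_per_L(5))   # New Jersey limit: 5 ppb = 0.005 mg/L
```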
In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic) the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard.
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic.
In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior.
Consumer Reports recommended:
That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production;
That the FDA establish a legal limit for food;
That industry change production practices to lower arsenic levels, especially in food for children; and
That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one-third, along with a slight reduction in vitamin content).
Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice.
A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice.
Reducing arsenic content in rice
In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption.
Occupational exposure limits
Ecotoxicity
Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils.
Toxicity in animals
Biological mechanism
Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes.
Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur.
Exposure risks and remediation
Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry.
The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn, and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites". The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic.
Treatment
Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the US Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer, in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity.
See also
Aqua Tofana
Arsenic and Old Lace
Arsenic biochemistry
Arsenic compounds
Arsenic poisoning
Arsenic toxicity
Arsenic trioxide
Fowler's solution
GFAJ-1
Grainger challenge
Hypothetical types of biochemistry
Organoarsenic chemistry
Toxic heavy metal
White arsenic
References
Bibliography
Further reading
External links
Arsenic Cancer Causing Substances, U.S. National Cancer Institute.
CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database
Arsenic intoxication: general aspects and chelating agents, by Geir Bjørklund, Massimiliano Peana et al. Archives of Toxicology (2020) 94:1879–1897.
A Small Dose of Toxicology
Arsenic in groundwater Book on arsenic in groundwater by IAH's Netherlands Chapter and the Netherlands Hydrological Society
Contaminant Focus: Arsenic by the EPA.
Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO.
National Institute for Occupational Safety and Health – Arsenic Page
Arsenic at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Metalloids
Hepatotoxins
Pnictogens
Endocrine disruptors
IARC Group 1 carcinogens
Trigonal minerals
Minerals in space group 166
Teratogens
Fetotoxicants
Suspected testicular toxicants
Native element minerals
Chemical elements with rhombohedral structure
https://en.wikipedia.org/wiki/Antimony
Antimony is a chemical element with the symbol Sb () and atomic number 51. A lustrous gray metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of the metalloid in the West was written in 1540 by Vannoccio Biringuccio.
China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron.
The largest applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices.
Characteristics
Properties
Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature, but reacts with oxygen if heated to produce antimony trioxide, Sb2O3.
Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to mark hard objects. Coins of antimony were issued in China's Guizhou province in 1931; durability was poor, and minting was soon discontinued. Antimony is resistant to attack by acids.
Four allotropes of antimony are known: a stable metallic form, and three metastable forms (explosive, black, and yellow). Elemental antimony is a brittle, silver-white, shiny metalloid. When slowly cooled, molten antimony crystallizes into a trigonal cell, isomorphic with the gray allotrope of arsenic. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Black antimony is formed upon rapid cooling of antimony vapor. It has the same crystal structure as red phosphorus and black arsenic; it oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The yellow allotrope of antimony is the most unstable; it has been generated only by oxidation of stibine (SbH3) at −90 °C. Above this temperature and in ambient light, this metastable allotrope transforms into the more stable black allotrope.
Elemental antimony adopts a layered structure (space group R3m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony.
Isotopes
Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Antimony is the lightest element to have an isotope with an alpha decay branch, excluding 8Be and other light nuclides with beta-delayed alpha emission.
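The decay of the longest-lived radioisotope, 125Sb, follows the usual exponential law N(t) = N0 · 2^(−t / t½) with t½ = 2.75 years. A minimal sketch (the function name is an assumption):

```python
def fraction_remaining(t_years: float, t_half_years: float = 2.75) -> float:
    """Fraction of 125Sb nuclei remaining after t_years of decay."""
    return 0.5 ** (t_years / t_half_years)

print(fraction_remaining(2.75))   # 0.5 after one half-life
print(fraction_remaining(27.5))   # ~1e-3 after ten half-lives
```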
Occurrence
The abundance of antimony in the Earth's crust is estimated at 0.2 parts per million, comparable to thallium at 0.5 parts per million and silver at 0.07 ppm. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3) which is the predominant ore mineral.
Compounds
Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +3 oxidation state is the more common.
Oxides and hydroxides
Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts.
Antimonous acid is unknown, but the conjugate base sodium antimonite forms upon fusing sodium oxide and antimony trioxide. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate Sb2O5·nH2O, forming salts of the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides.
The most important antimony ore is stibnite (Sb2S3). Other sulfide minerals include pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are also known.
Halides
Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry.
The trifluoride SbF3 is prepared by the reaction of Sb2O3 with HF:
Sb2O3 + 6 HF → 2 SbF3 + 3 H2O
It is Lewis acidic and readily accepts fluoride ions to form the complex anions SbF4− and SbF52−. Molten SbF3 is a weak electrical conductor. The trichloride SbCl3 is prepared by dissolving stibnite (Sb2S3) in hydrochloric acid:
Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S
Arsenic sulfides are not readily attacked by the hydrochloric acid, so this method offers a route to As-free Sb.
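The stibnite route (Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S) can be checked for atom balance with a few lines of code. This is a quick illustrative check written with element-count dictionaries, not a chemistry library; the helper name is an assumption.

```python
from collections import Counter

def total_atoms(species: list) -> Counter:
    """Sum element counts over (coefficient, formula-dict) pairs."""
    out = Counter()
    for coeff, formula in species:
        for element, n in formula.items():
            out[element] += coeff * n
    return out

# Sb2S3 + 6 HCl -> 2 SbCl3 + 3 H2S
reactants = [(1, {"Sb": 2, "S": 3}), (6, {"H": 1, "Cl": 1})]
products = [(2, {"Sb": 1, "Cl": 3}), (3, {"H": 2, "S": 1})]

assert total_atoms(reactants) == total_atoms(products)
print("balanced:", dict(total_atoms(reactants)))  # {'Sb': 2, 'S': 3, 'H': 6, 'Cl': 6}
```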
The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7").
Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl and (SbO)2SO4.
Antimonides, hydrides, and organoantimony compounds
Compounds in this class are generally described as derivatives of Sb3−. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3:
Sb3− + 3 H+ → SbH3
Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable; thus antimony does not react with hydrogen directly.
Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include triphenylstibine (Sb(C6H5)3) and pentaphenylantimony (Sb(C6H5)5).
History
Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented.
An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable."
The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable."
The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony.
The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating in a current of air. It is thought that this produced metallic antimony.
Antimony was frequently described in alchemical manuscripts, including the Summa Perfectionis of Pseudo-Geber, written around the 14th century. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio.
The metal antimony was known to German chemist Andreas Libavius in 1615 who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface.
With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals.
The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden.
Etymology
The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of this is uncertain; all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, still has adherents; this would mean "monk-killer", and is explained by many early alchemists being monks, and antimony being poisonous. However, the low toxicity of antimony (see below) makes this unlikely.
Another popular etymology is the hypothetical Greek word ἀντίμόνος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". Edmund Oscar von Lippmann conjectured a hypothetical Greek word ανθήμόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence.
The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek.
The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium.
The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony.
The Egyptians called antimony mśdmt or stm.
The Arabic word for the substance, as opposed to the cosmetic, can appear as إثمد ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. The Greek word, στίμμι (stimmi) is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm.
Production
Process
The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron:
Sb2S3 + 3 Fe → 2 Sb + 3 FeS
The sulfide is converted to an oxide by roasting. The product is further purified by vaporizing the volatile antimony(III) oxide, which is recovered. This sublimate is often used directly for the main applications, the impurities being mostly arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction:
2 Sb2O3 + 3 C → 4 Sb + 3 CO2
The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces.
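The mass balance implied by the carbothermal reduction above can be sketched with standard atomic masses; this is an illustrative calculation only, not part of the industrial process description:

```python
# Sketch: mass balance for the carbothermal reduction
#   2 Sb2O3 + 3 C -> 4 Sb + 3 CO2
# Atomic masses in g/mol (standard values).
M_SB, M_O, M_C = 121.76, 16.00, 12.011
M_SB2O3 = 2 * M_SB + 3 * M_O          # molar mass of the oxide, ~291.5 g/mol

sb_yield = 4 * M_SB / (2 * M_SB2O3)   # mass fraction of Sb recoverable from the oxide
c_needed = 3 * M_C / (4 * M_SB)       # mass of carbon consumed per mass of Sb produced
print(round(sb_yield, 3), round(c_needed, 3))
```

Roughly 84% of the oxide mass is recoverable as antimony, with about 74 g of carbon consumed per kilogram of metal, before process losses.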
Top producers and production volumes
In 2022, according to the US Geological Survey, China accounted for 54.5% of total antimony production, followed in second place by Russia with 18.2% and Tajikistan with 15.5%.
Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. An environmental protection law that took effect in January 2015, together with revised "Emission Standards of Pollutants for Stanum, Antimony, and Mercury", has raised the hurdles for economic production.
Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted.
Reserves
Supply risk
For antimony-importing regions such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan (8%), and Russia (4%), these sources are critical to supply.
European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%).
United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index.
United States: Antimony is a mineral commodity considered critical to the economic and national security. In 2022, no antimony was mined in the U.S.
Applications
Approximately 48% of antimony is consumed in flame retardants, 33% in lead–acid batteries, and 8% in plastics.
Flame retardants
Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed.
Alloys
Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes.
Other applications
Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing the tendency of the latter to form bubbles. The third application is pigments.
In the 1990s antimony was increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide (InSb) is used as a material for mid-infrared detectors.
Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was used as an anti-schistosomal drug from 1919 on; it was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals.
Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis in domestic animals. Besides having low therapeutic indices, the drugs have minimal penetration of the bone marrow, where some of the Leishmania amastigotes reside, and curing the disease – especially the visceral form – is very difficult. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination.
Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources.
Historically, the powder derived from crushed antimony (kohl) has been applied to the eyes with a metal rod and with one's spittle, thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries.
Precautions
Antimony and many of its compounds are toxic, and the effects of antimony poisoning are similar to those of arsenic poisoning. The toxicity of antimony is far lower than that of arsenic; this might be caused by the significant differences in uptake, metabolism and excretion between arsenic and antimony. The uptake of antimony(III) or antimony(V) in the gastrointestinal tract is at most 20%. Antimony(V) is not quantitatively reduced to antimony(III) in the cell (in fact, antimony(III) is oxidised to antimony(V) instead).
Since methylation of antimony does not occur, the excretion of antimony(V) in urine is the main route of elimination. As with arsenic, the most serious effect of acute antimony poisoning is cardiotoxicity and the resulting myocarditis; however, antimony poisoning can also manifest as Adams–Stokes syndrome, which arsenic poisoning does not. Intoxication with the equivalent of 90 mg of antimony potassium tartrate dissolved from enamel has been reported to show only short-term effects, whereas an intoxication with 6 g of antimony potassium tartrate was reported to result in death after three days.
Inhalation of antimony dust is harmful and in certain cases may be fatal; in small doses, antimony causes headaches, dizziness, and depression. Larger doses, or prolonged skin contact, may cause dermatitis or damage the kidneys and the liver, producing violent and frequent vomiting and leading to death within a few days.
Antimony is incompatible with strong oxidizing agents, strong acids, halogen acids, chlorine, or fluorine. It should be kept away from heat.
Antimony leaches from polyethylene terephthalate (PET) bottles into liquids. While levels observed for bottled water are below drinking water guidelines, fruit juice concentrates (for which no guidelines are established) produced in the UK were found to contain up to 44.7 µg/L of antimony, well above the EU limits for tap water of 5 µg/L. The guidelines are:
World Health Organization: 20 µg/L
Japan: 15 µg/L
United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 µg/L
EU and German Federal Ministry of Environment: 5 µg/L
The tolerable daily intake (TDI) proposed by WHO is 6 µg antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3.
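As a rough check of the figures above, the measured juice-concentrate concentration can be compared against each guideline and against the WHO tolerable daily intake. The 70 kg body weight and 0.5 L/day consumption below are illustrative assumptions, not values from the sources:

```python
# Guideline limits for antimony in drinking water, µg/L (from the list above)
GUIDELINES_UG_PER_L = {"WHO": 20, "Japan": 15, "US EPA": 6, "EU": 5}
TDI_UG_PER_KG = 6            # WHO tolerable daily intake, µg per kg body weight

measured = 44.7              # µg/L, the UK fruit-juice-concentrate figure above
body_weight_kg = 70          # assumed for illustration
litres_per_day = 0.5         # assumed for illustration

# Which water guidelines would this concentration exceed?
exceeds = [name for name, limit in GUIDELINES_UG_PER_L.items() if measured > limit]

daily_intake = measured * litres_per_day      # µg/day from the assumed consumption
tdi = TDI_UG_PER_KG * body_weight_kg          # 420 µg/day for the assumed weight
print(exceeds, daily_intake, tdi)
```

The concentration exceeds every drinking-water guideline listed, yet the resulting daily intake under these assumptions remains well below the WHO TDI, which is why the guidelines apply to water rather than to concentrates.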
Toxicity
Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans.
Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin. The presence of low levels of antimony in saliva may also be associated with dental decay.
See also
Phase change memory
Notes
References
Cited sources
External links
Public Health Statement for Antimony
International Antimony Association vzw (i2a)
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Antimony
Antimony at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Antimony
Antimony Mineral data and specimen images
Chemical elements
Metalloids
Native element minerals
Nuclear materials
Pnictogens
Trigonal minerals
Minerals in space group 166
Materials that expand upon freezing
Chemical elements with rhombohedral structure
https://en.wikipedia.org/wiki/Actinium
Actinium is a chemical element with the symbol Ac and atomic number 89. It was first isolated by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element got its name by being wrongly identified with a substance André-Louis Debierne found in 1899 and called actinium. Actinium gave the name to the actinide series, a set of 15 elements between actinium and lawrencium in the periodic table. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated.
A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy.
History
André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times.
Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians as the discoverer, lost interest in the element and left the topic. Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89.
The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde.
Properties
Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation.
The first element of the actinides, actinium gave the set its name, much as lanthanum had done for the lanthanides. The actinides are much more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett).
Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn] 6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. Although the 5f orbitals are unoccupied in an actinium atom, they can be used as valence orbitals in actinium complexes, and hence actinium is generally considered the first 5f element by authors working on it. Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules.
Chemical compounds
Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent.
Here a, b and c are lattice constants, No is space group number and Z is the number of formula units per unit cell. Density was not measured directly but calculated from the lattice parameters.
Oxides
Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals.
Halides
Actinium trifluoride can be produced either in solution or in a solid-state reaction. The former reaction is carried out at room temperature by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields the oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product.
AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F
Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similar to oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia.
Reaction of aluminium bromide and actinium oxide yields actinium tribromide:
Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3
and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr.
Other compounds
Actinium hydride was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain.
Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in black actinium sulfide Ac2S3. It may possibly also be produced by treating actinium oxide with a mixture of hydrogen sulfide and carbon disulfide at 1000 °C.
Isotopes
Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-three radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives of less than 10 hours, and the majority have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac.
Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life, emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 204 u (204Ac) to 236 u (236Ac).
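The half-life and branching figures above translate directly into an exponential-decay calculation; a minimal illustrative sketch:

```python
import math

# Sketch: decay of a pure 227Ac sample, using the 21.772-year half-life
# and the 98.62% beta / 1.38% alpha branching quoted above.
HALF_LIFE_Y = 21.772
LAMBDA = math.log(2) / HALF_LIFE_Y        # decay constant, per year

def fraction_remaining(t_years):
    """Fraction of the original 227Ac still undecayed after t years."""
    return math.exp(-LAMBDA * t_years)

# Over one half-life, half the sample decays; 1.38% of those decays are alpha.
decayed = 1.0 - fraction_remaining(HALF_LIFE_Y)   # 0.5
alpha_share = 0.0138 * decayed                    # fraction of atoms decaying by alpha
print(round(decayed, 3), round(alpha_share, 4))
```

The small alpha share (under 1% of the sample per half-life) is why 227Ac is traced through its decay products rather than detected directly.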
Occurrence and synthesis
Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U.
The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor.
^{226}_{88}Ra + ^{1}_{0}n -> ^{227}_{88}Ra ->[\beta^-][42.2 \ \ce{min}] ^{227}_{89}Ac
The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and nuclear fusion, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant.
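The 42.2-minute beta decay of 227Ra in the reaction above sets the timescale on which the irradiated target converts to actinium; a small sketch, assuming simple first-order decay:

```python
import math

# Sketch: conversion of 227Ra (produced by neutron capture on 226Ra)
# into 227Ac via beta decay with the 42.2-minute half-life quoted above.
T_HALF_MIN = 42.2

def fraction_converted(t_min):
    """Fraction of the 227Ra inventory that has decayed to 227Ac after t minutes."""
    return 1.0 - 2.0 ** (-t_min / T_HALF_MIN)

# Time for 99% of the captured 227Ra to become 227Ac:
t99 = T_HALF_MIN * math.log2(100)    # a few hours
print(round(fraction_converted(T_HALF_MIN), 3), round(t99, 1))
```

So after a few hours' cooling, essentially all of the short-lived 227Ra has become 227Ac, and the chemical separation from radium can begin.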
225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac which however decays with a half-life of 29 hours and thus does not contaminate 225Ac.
Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones led to an incomplete transformation. Lithium was chosen among the alkali metals because its fluoride is the most volatile.
Applications
Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies.
227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations.
225Ac is applied in medicine to produce 213Bi in a reusable generator, or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than stable but toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with the application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow. Much better results were obtained with chelating agents such as HEHA or DOTA coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers.
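The generator scheme can be approximated with the Bateman equation; the sketch below treats 225Ac → 213Bi as a direct parent–daughter pair, ignoring the short-lived intermediate decays between them (a simplifying assumption for illustration):

```python
import math

# Hedged sketch: daughter activity build-up in a 225Ac/213Bi generator,
# modeled as a two-member decay chain via the Bateman equation.
T_AC225 = 10.0 * 24 * 60    # parent half-life in minutes (10 days, as above)
T_BI213 = 46.0              # daughter half-life in minutes (as above)

lam1 = math.log(2) / T_AC225
lam2 = math.log(2) / T_BI213

def bi213_activity(t_min, a_parent0=1.0):
    """213Bi activity at time t, per unit initial 225Ac activity."""
    return a_parent0 * lam2 / (lam2 - lam1) * (
        math.exp(-lam1 * t_min) - math.exp(-lam2 * t_min))

# After roughly five daughter half-lives (~4 hours) the generator is
# close to transient equilibrium and can be eluted again.
print(round(bi213_activity(5 * T_BI213), 3))
```

Because the parent half-life is so much longer than the daughter's, the generator re-approaches equilibrium within hours of each elution, which is what makes it reusable.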
The moderate half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope for modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order of 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimating the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both the 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior.
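The profile argument above can be sketched numerically. Assuming a steady-state one-dimensional balance between upward turbulent diffusion and radioactive decay of 227Ac released at the seafloor (a textbook simplification, not the full oceanographic model), the excess concentration falls off exponentially with height above the bottom, with a scale height of sqrt(K/λ); the eddy diffusivity K used below is an illustrative assumption:

```python
import math

def excess_ac227(height_m, k_z_m2_s, half_life_yr=21.77):
    """Steady-state excess 227Ac (relative to the bottom value) at a given
    height above the seafloor, from the 1-D diffusion-decay balance
        K d^2C/dz^2 = lambda * C   =>   C(z) = C0 * exp(-z * sqrt(lambda/K)).
    """
    lam = math.log(2) / (half_life_yr * 365.25 * 86400)  # decay constant, 1/s
    return math.exp(-height_m * math.sqrt(lam / k_z_m2_s))

# Illustrative deep-ocean vertical eddy diffusivity (assumed value):
K = 1e-4  # m^2/s
for z in (0, 500, 1000, 2000):
    print(f"{z:5d} m above bottom: {excess_ac227(z, K):.3f} of bottom value")
```

With these numbers the scale height is roughly 300 m, so the bottom-released excess 227Ac is essentially gone a couple of kilometers up, which is the signature used to infer mixing rates.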
There are theoretical predictions that actinium hydrides AcHx under very high pressure are candidates for near-room-temperature superconductivity, with predicted Tc significantly higher than that of H3S, possibly near 250 K.
Precautions
227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box. When actinium trichloride is administered intravenously to rats, about 33% of actinium is deposited into the bones and 50% into the liver. Its toxicity is comparable to, but slightly lower than that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary.
See also
Actinium series
Notes
References
Bibliography
Meyer, Gerd and Morss, Lester R. (1991) Synthesis of lanthanide and actinide compounds, Springer.
External links
Actinium at The Periodic Table of Videos (University of Nottingham)
NLM Hazardous Substances Databank – Actinium, Radioactive
https://en.wikipedia.org/wiki/Americium
Americium is a synthetic radioactive chemical element with the symbol Am and atomic number 95. It is a transuranic member of the actinide series, located in the periodic table under the lanthanide element europium, and was thus named by analogy after the Americas.
Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer.
Americium is a relatively soft radioactive metal with a silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulate with time; this can cause a drift of some material properties, more noticeable in older samples.
History
Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series."
The new element was isolated from its oxides in a complex, multi-step process. First a plutonium-239 nitrate (239Pu(NO3)4) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).
Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined inaccurately and was later corrected to 432.2 years.
(Decay scheme diagram; the times shown are half-lives.)
The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h.
The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C.
Occurrence
The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of nuclear reactions, though this has not been confirmed.
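The claim that primordial americium cannot have survived follows directly from the half-lives quoted above. A quick back-of-the-envelope check, assuming an Earth age of about 4.5 billion years:

```python
# Even the longest-lived isotope, 243Am (7,370 yr), has gone through some
# 600,000 half-lives since the Earth formed.
EARTH_AGE_YR = 4.5e9   # assumed age of the Earth, years
T_AM243 = 7370.0       # 243Am half-life, years

n_half_lives = EARTH_AGE_YR / T_AM243
remaining = 2.0 ** (-n_half_lives)  # underflows to exactly 0.0 in floats
print(f"{n_half_lives:,.0f} half-lives; remaining fraction: {remaining}")
```

The surviving fraction is 2 raised to roughly minus 600,000, a number so small it underflows any floating-point representation, so any americium found today must be of recent, non-primordial origin.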
Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland.
In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils.
Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, in which americium is first separated and then converted by neutron bombardment in special reactors into short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Americium is also one of the elements that have theoretically been detected in Przybylski's Star.
Synthesis and extraction
Isotope nucleosynthesis
Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, the price of 241Am has remained almost unchanged, owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a considerably higher cost.
Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu
The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am:
^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am
The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years.
The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm.
Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux:
^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am
Metal generation
Most synthesis routes yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent; it is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes, which can be washed away.
Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten.
An alternative is the reduction of americium dioxide by metallic lanthanum or thorium:
3AmO2 + 4La -> 3Am + 2La2O3
AmO2 + Th -> Am + ThO2
Physical properties
In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3); but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than for curium (1340 °C).
At ambient conditions, americium is present in its most stable α form, which has hexagonal crystal symmetry, space group P63/mmc, cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic (fcc) symmetry, space group Fm-3m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for an appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β–γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α–β transition, it is not observed experimentally. The pressure of the α–β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure–temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium.
As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structure defects is relatively low, as a broadening of X-ray diffraction peaks. This effect makes the temperature dependence of some properties of americium, such as electrical resistivity, somewhat uncertain. Thus for americium-241, the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours. This effect is less pronounced at room temperature owing to annihilation of radiation defects; likewise, heating a sample that was kept for hours at low temperature back to room temperature restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 µOhm·cm at liquid-helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room-temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium.
Americium is paramagnetic over a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, differing between the shorter a axis and the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured; from it, the standard enthalpy change of formation (ΔfH°) of the aqueous Am3+ ion and the standard potential Am3+/Am0 have been derived.
Chemical properties
Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in the oxidation states 2, 4, 5, 6 and 7 have also been studied; this is the widest range that has been observed among the actinide elements. The color of americium compounds in aqueous solution is as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmO2+ (yellow), AmO22+ (brown) and AmO65− (dark green). The absorption spectra have sharp peaks due to f–f transitions in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm.
Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state.
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation. The reaction
3AmO2^+ + 4H^+ -> 2AmO2^2+ + Am^3+ + 2H2O
is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, Am(V) and Am(VI) form oxo-compounds comparable to uranates, and the AmO22+ ion is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate.
Chemical compounds
Oxygen compounds
Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide was prepared in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium which is used in nearly all its applications. As most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure.
The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C, the decomposition completes at about 470 °C. The initial oxalate dissolves in nitric acid with the maximum solubility of 0.25 g/L.
Halides
Halides of americium are known for the oxidation states +2, +3 and +4, where the +3 is most stable, especially in solutions.
Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. AmCl2 crystallizes in an orthorhombic lattice and AmBr2 in a tetragonal one. These dihalides can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I:
{Am} + \underset{mercury\ halide}{HgX2} ->[{} \atop 400 - 500 ^\circ \ce C] {AmX2} + {Hg}
Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions:
Am^3+ + 3F^- -> AmF3(v)
The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine:
2AmF3 + F2 -> 2AmF4
Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F with an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differs from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and assigned to self-irradiation of americium by alpha particles.
Most americium(III) halides form hexagonal crystals with slight variation of the color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish and has a structure isotypic to uranium(III) chloride (space group P63/m) and a melting point of 715 °C. The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R-3). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium trichloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic, have a yellow-reddish color and a monoclinic crystal structure.
Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis:
AmCl3 + H2O -> AmOCl + 2HCl
Chalcogenides and pnictides
The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice.
Silicides and borides
Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi; it has an orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or an inert atmosphere.
Organoamericium compounds
Analogous to uranocene, americium forms the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3.
Formation of complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and are therefore useful in its selective separation from lanthanides and other actinides.
Biological aspects
Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. Thus, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi.
Fission
The isotope 242mAm (half-life 141 years) has the largest cross section for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but those based on 242mAm are not known yet, probably because of its scarcity and high price. The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price still hinder the application of americium as a nuclear fuel in nuclear reactors.
There are proposals of very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals.
Isotopes
About 19 isotopes and 8 nuclear isomers are known for americium. There are two long-lived alpha-emitters; 243Am has a half-life of 7,370 years and is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am; it has a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with odd number of neutrons have relatively high rate of nuclear fission and low critical mass.
Americium-241 decays to 237Np emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with the discrete energies between 26.3 and 158.5 keV.
Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U.
Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U.
Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle.
Applications
Ionization-type smoke detector
Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation.
The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms.
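The quoted neptunium fractions follow directly from the exponential decay law N(t) = N0·2^(−t/T½); a quick check:

```python
T_AM241 = 432.2  # 241Am half-life, years

def decayed_fraction(t_years, half_life=T_AM241):
    """Fraction of an initial 241Am sample that has decayed (to 237Np)
    after t years, from N(t) = N0 * 2**(-t / half_life)."""
    return 1.0 - 2.0 ** (-t_years / half_life)

for t in (19, 32):
    print(f"after {t} years: {100 * decayed_fraction(t):.1f}% neptunium-237")
# -> about 3.0% after 19 years and 5.0% after 32 years
```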
Radionuclide
As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes.
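The quoted 114.7 mW/g for 241Am can be cross-checked from its half-life and decay energy; the alpha-decay Q-value of about 5.64 MeV used below is an assumed literature figure:

```python
import math

N_A = 6.02214e23        # Avogadro constant, 1/mol
MEV_TO_J = 1.60218e-13  # joules per MeV
YEAR_S = 3.15576e7      # seconds per Julian year

def specific_power_mw_per_g(half_life_yr, molar_mass_g, q_mev):
    """Decay heat per gram: P = lambda * (N_A / M) * Q."""
    lam = math.log(2) / (half_life_yr * YEAR_S)           # decay constant, 1/s
    atoms_per_gram = N_A / molar_mass_g
    return lam * atoms_per_gram * q_mev * MEV_TO_J * 1e3  # mW/g

# Assumed Q-value for 241Am alpha decay (~5.64 MeV):
p_am241 = specific_power_mw_per_g(432.2, 241.06, 5.64)
print(f"241Am: {p_am241:.1f} mW/g")  # close to the quoted 114.7 mW/g
```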
Another proposed space-related application of americium is a fuel for space ships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. Small thickness avoids the problem of self-absorption of emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha-particles. The fission products of 242mAm can either directly propel the spaceship or they can heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator.
One more proposal which utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; that is, the americium acts as a self-sustaining "cathode". A single 3.2 kg 242mAm charge of such a battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer.
In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function.
Neutron source
The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations.
Production of other elements
Americium is a starting material for the production of other transuranic elements and transactinides: for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In a nuclear reactor, 242Am is also up-converted by successive neutron capture to 243Am and 244Am, the latter of which transforms by β− decay to 244Cm:
^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm
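The 10.1-hour half-life in the scheme above implies simple exponential decay of 244Am into its daughter 244Cm; a minimal sketch of that arithmetic:

```python
# Exponential decay of 244Am (half-life 10.1 h, per the scheme above);
# each decay feeds the daughter 244Cm.
T_HALF_H = 10.1

def fraction_remaining(t_hours: float) -> float:
    """Fraction of an initial 244Am sample not yet decayed after t_hours."""
    return 2 ** (-t_hours / T_HALF_H)

print(fraction_remaining(10.1))   # 0.5 after one half-life
print(fraction_remaining(24.0))   # ~0.19 remains after a day
```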
Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the 243Bk isotope) was first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O.
Spectrometer
Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and negligible Compton continuum (at least three orders of magnitude lower intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is however obsolete.
Health concerns
As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth.
If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity.
Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease.
See also
Actinides in the environment
Americium compounds (category)
Notes
References
Bibliography
Penneman, R. A. and Keenan, T. K. The Radiochemistry of Americium and Curium, University of California, Los Alamos, California, 1960
Further reading
Nuclides and Isotopes – 14th Edition, GE Nuclear Energy, 1989.
External links
Americium at The Periodic Table of Videos (University of Nottingham)
ATSDR – Public Health Statement: Americium
World Nuclear Association – Smoke Detectors and Americium
https://en.wikipedia.org/wiki/Astatine
Astatine is a chemical element with the symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Consequently, a solid sample of the element has never been seen, because any macroscopic specimen would be immediately vaporized by the heat of its radioactivity.
The bulk properties of astatine are not known with certainty. Many of them have been estimated from its position on the periodic table as a heavier analog of iodine, and a member of the halogens (the group of elements including fluorine, chlorine, bromine, iodine and tennessine). However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver.
The first synthesis of astatine was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley. They named it from the Ancient Greek ἄστατος (astatos), meaning 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope, astatine-210, nor the medically useful astatine-211 occur naturally; they are usually produced by bombarding bismuth-209 with alpha particles.
Characteristics
Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of seconds or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than the longest-lived francium isotopes are in any case synthetic and do not occur in nature.
The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would be immediately vaporized by the heat generated by its intense radioactivity. It remains to be seen whether, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted.
Physical
Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal).
Astatine sublimes less readily than does iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions.
The structure of solid astatine is unknown. As an analog of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure, it may well be a superconductor, like the similar high-pressure phase of iodine. Metallic astatine is expected to have a density of 8.91–8.95 g/cm3.
Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted; for example, its heat of vaporization (∆Hvap) is predicted to be 54.39 kJ/mol, and predicted values for its bond length and dissociation energy have also been published. Many values have been predicted for the melting and boiling points of astatine, but only for At2.
Chemical
The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects, astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution.
Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008).
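The percentage differences in electron affinity quoted above are straightforward to verify from the listed values (all in kJ mol−1):

```python
# Electron affinities (kJ/mol) as quoted in the text; verify the
# percentage differences stated for each halogen-to-halogen step.
ea = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}

def pct_change(a: str, b: str) -> float:
    """Percentage difference of ea[b] relative to ea[a]."""
    return 100 * (ea[b] - ea[a]) / ea[a]

print(round(pct_change("F", "Cl"), 1))   # 6.4   (Cl higher than F)
print(round(pct_change("Cl", "Br"), 1))  # -6.9  (Br lower than Cl)
print(round(pct_change("Br", "I"), 1))   # -9.2  (I lower than Br)
print(round(pct_change("I", "At"), 1))   # -21.0 (the marked reduction for At)
```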
Compounds
Less reactive than iodine, astatine is the least reactive of the halogens; the chemical properties of tennessine, the next-heavier group 17 element, have not yet been investigated, however. Astatine compounds have been synthesized in nano-scale amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7.
Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and of the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides.
The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide.
Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms.
With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the latter case) by sodium persulfate in a solution of perchloric acid. A species previously assigned a different formula has since been determined to be a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well-characterized astatate anion (AtO3−) can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of astatate, such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion AtO4−; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate.
Astatine may form bonds to the other chalcogens; these include S7At+ and at least one other species with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium.
Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. An excess of iodides or bromides may lead to the AtBr2− and AtI2− ions, or in a chloride solution, species such as AtCl2− or AtBrCl− may be produced via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, AtOCl2− or AtCl2− may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride.
History
In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries.
The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine, and astatine's radioactivity would have prevented him from handling it in the quantities he claimed. Moreover, astatine is not found in the thorium series, and the true identity of dakin is not known.
In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 by observing its X-ray emission lines. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine-218, his means to detect it were too weak, by current standards, to enable correct identification; moreover, he could not perform chemical tests on the element. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work.
In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from Helvetia, the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results.
Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as legitimately as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) Friedrich Paneth in 1946 called for synthetic elements to finally be recognized, citing, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered unnamed elements name those elements. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", from the Ancient Greek ἄστατος (astatos) meaning 'unstable', because of its propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element.
Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi."
Isotopes
There are 41 known isotopes of astatine, with mass numbers of 188 and 190–229. Theoretical modeling suggests that about 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist.
Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. A beta decay mode has been found for all other astatine isotopes except for astatine-213, astatine-214, and astatine-216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta minus decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture.
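One way to see how much more probable alpha decay is for astatine-211 than for astatine-210 is to compare partial half-lives, dividing each total half-life by its alpha branching fraction. This is a standard nuclear-physics relation, sketched here with the figures quoted above:

```python
# Partial half-life for a single decay branch: T_partial = T_total / BR.
# Values from the text: At-210 (8.1 h, alpha branch 0.18%),
# At-211 (7.2 h, alpha branch 41.81%).

def partial_half_life(t_half_h: float, branching: float) -> float:
    """Half-life the branch would have if it were the only decay mode."""
    return t_half_h / branching

print(round(partial_half_life(7.2, 0.4181), 1))  # ~17.2 h for At-211 alpha
print(round(partial_half_life(8.1, 0.0018)))     # ~4500 h for At-210 alpha
```

The similar total half-lives thus hide a difference of more than two orders of magnitude in the alpha-decay rate itself.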
The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209.
Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213.
Natural occurrence
Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 1025 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on earth at any given moment, to be up to one ounce (about 28 grams).
Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes.
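The quoted figure of around 3.5 × 10−10 grams for one trillion astatine-215 atoms follows directly from Avogadro's number:

```python
# Mass of ~1e12 atoms of astatine-215, as quoted for the combined
# landmass of the Americas: m = N * M / N_A, with M ~ 215 g/mol.
N_A = 6.022e23        # Avogadro's number, 1/mol
atoms = 1e12
molar_mass = 215.0    # g/mol for astatine-215

mass_g = atoms * molar_mass / N_A
print(f"{mass_g:.2e} g")   # ~3.57e-10 g, matching the quoted 3.5e-10 g
```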
Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed.
Synthesis
Formation
Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms, or 2.47 × 1014 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be generated via proton irradiation of thorium or uranium to yield radon-211, which in turn decays to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method.
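The equivalence of 6.6 gigabecquerels, roughly 86 nanograms, and 2.47 × 1014 atoms follows from the activity relation A = λN together with the 7.2-hour half-life of astatine-211:

```python
import math

# Convert the quoted production activity of astatine-211 (6.6 GBq)
# into a number of atoms and a mass, using A = lambda * N.
T_HALF_S = 7.2 * 3600        # At-211 half-life, in seconds
ACTIVITY_BQ = 6.6e9          # quoted production run
N_A = 6.022e23               # Avogadro's number, 1/mol
MOLAR_MASS = 211.0           # g/mol for astatine-211

decay_const = math.log(2) / T_HALF_S       # lambda, 1/s
atoms = ACTIVITY_BQ / decay_const          # N = A / lambda
mass_ng = atoms * MOLAR_MASS / N_A * 1e9   # grams -> nanograms

print(f"{atoms:.2e} atoms")   # ~2.47e14, matching the text
print(f"{mass_ng:.0f} ng")    # ~86 ng, matching the text
```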
The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. In order to eliminate undesired nuclides, the maximum energy of the particle accelerator is set to a value (optimally 29.17 MeV) above that for the reaction producing astatine-211 (to produce the desired isotope) and below the one producing astatine-210 (to avoid producing other astatine isotopes).
Separation methods
Since astatine is the main product of the synthesis, after its formation it need only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam.
Dry
The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine.
Wet
The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as dibutyl ether, diisopropyl ether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry.
Uses and precautions
{| class="wikitable"
|+ Several 211At-containing molecules and their experimental uses
! Agent
! Applications
|-
| [211At]astatine-tellurium colloids
| Compartmental tumors
|-
| 6-[211At]astato-2-methyl-1,4-naphtaquinol diphosphate
| Adenocarcinomas
|-
| 211At-labeled methylene blue
| Melanomas
|-
| Meta-[211At]astatobenzyl guanidine
| Neuroendocrine tumors
|-
| 5-[211At]astato-2'-deoxyuridine
| Various
|-
| 211At-labeled biotin conjugates
| Various pretargeting
|-
| 211At-labeled octreotide
| Somatostatin receptor
|-
| 211At-labeled monoclonal antibodies and fragments
| Various
|-
| 211At-labeled bisphosphonates
| Bone metastases
|}
Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210.
The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 µm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell.
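The "nearly 30 times" figure follows directly from the two ranges quoted above; a trivial check in Python:

```python
alpha_range_um = 70.0   # mean 211At alpha-particle range in tissue, µm (from the text)
beta_range_mm = 2.0     # mean-energy 131I beta-particle range in tissue, mm (from the text)

# 2 mm / 70 µm ≈ 29, i.e. "nearly 30 times as far"
ratio = (beta_range_mm * 1000.0) / alpha_range_um
print(f"beta travels about {ratio:.0f}x farther than alpha")
```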
Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue.
Animal studies show that astatine, similarly to iodine—although to a lesser extent, perhaps because of its slightly more metallic nature—is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided.
See also
Radiation protection
Notes
References
Bibliography
External links
Astatine at The Periodic Table of Videos (University of Nottingham)
Astatine: Halogen or Metal?
https://en.wikipedia.org/wiki/Aluminium
Aluminium (aluminum in North American English) is a chemical element with the symbol Al and atomic number 13. Aluminium has a density lower than that of other common metals, about one-third that of steel. It has a great affinity towards oxygen, forming a protective layer of oxide on the surface when exposed to air. Aluminium visually resembles silver, both in its color and in its great ability to reflect light. It is soft, nonmagnetic and ductile. It has one stable isotope: 27Al, which is highly abundant, making aluminium the twelfth-most common element in the universe. The radioactivity of 26Al is used in radiometric dating.
Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it has more polarizing power, and bonds formed by aluminium have a more covalent character. The strong affinity of aluminium for oxygen leads to the common occurrence of its oxides in nature. Aluminium is found on Earth primarily in rocks in the crust, where it is the third-most abundant element, after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. It is obtained industrially by mining bauxite, a sedimentary rock rich in aluminium minerals.
The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In World Wars I and II, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan.
Despite its prevalence in the environment, no living organism is known to use aluminium salts for metabolism, but aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of interest, and studies continue.
Physical characteristics
Isotopes
Of aluminium isotopes, only 27Al is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. Aluminium is therefore a mononuclidic element and its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals.
All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 105 to 106 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago.
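The 26Al/10Be radiodating mentioned above works because, once a sediment is buried and shielded from cosmic rays, production stops and the ratio of the two nuclides decays with the difference of their decay constants. A minimal sketch in Python; the 10Be half-life (~1.39 Myr) and the surface production ratio used in the example are assumed values not given in the text:

```python
import math

T_HALF_AL26 = 0.717   # Myr, 26Al half-life (from the text)
T_HALF_BE10 = 1.387   # Myr, 10Be half-life (assumed, not from the text)

LAM_AL = math.log(2) / T_HALF_AL26
LAM_BE = math.log(2) / T_HALF_BE10

def burial_age_myr(ratio_now: float, ratio_initial: float) -> float:
    """Burial time inferred from the decline of the 26Al/10Be ratio.

    After burial both nuclides decay but nothing is produced, so
        R(t) = R0 * exp(-(lam_al - lam_be) * t)
    and solving for t gives the expression below.
    """
    return math.log(ratio_initial / ratio_now) / (LAM_AL - LAM_BE)

# Hypothetical example: surface production ratio ~6.75, measured ratio 3.4
print(f"burial age ≈ {burial_age_myr(3.4, 6.75):.2f} Myr")
```

The effective half-life of the ratio comes out near 1.5 Myr, consistent with the 10^5 to 10^6 year time scales quoted above.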
The remaining isotopes of aluminium, with mass numbers ranging from 22 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute.
Electron shell
An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale).
A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group; boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. The few electrons that are available for metallic bonding in aluminium metal are a probable cause for it being soft with a low melting point and low electrical resistivity.
Bulk
Aluminium metal has an appearance ranging from silvery white to dull gray, depending on the surface roughness. Aluminium mirrors are the most reflective of all metal mirrors for the near ultraviolet and far infrared light, and one of the most reflective in the visible spectrum, nearly on par with silver, and the two therefore look similar. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air adds wear to the surface of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface.
The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while the difference in unit cell size does not compensate for it. The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial.
Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium is ductile, with a percent elongation of 50–70%, and malleable, allowing it to be easily drawn and extruded. It is also easily machined and cast.
Aluminium is an excellent thermal and electrical conductor, having around 60% the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents.
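Using only the two ratios quoted above (60% of copper's conductivity at 30% of its density), aluminium's advantage per unit mass follows immediately. A minimal sketch in Python; the overhead-power-line remark in the comment is a standard consequence of this ratio, not a claim from this paragraph:

```python
conductivity_vs_cu = 0.60   # aluminium conductivity relative to copper (from the text)
density_vs_cu = 0.30        # aluminium density relative to copper (from the text)

# For conductors of equal mass and length, conductance scales as
# conductivity / density: aluminium carries about twice the current per
# kilogram of metal, which is why overhead lines favor aluminium.
per_mass = conductivity_vs_cu / density_vs_cu
print(f"conductivity per unit mass vs copper: {per_mass:.1f}x")
```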
Chemistry
Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship.
The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class.
Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium metal reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Because of its general resistance to corrosion, aluminium is one of the few metals that retains silvery reflectance in finely powdered form, making it an important component of silver-colored paints. Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids.
In hot concentrated hydrochloric acid, aluminium reacts with water with evolution of hydrogen, and in aqueous sodium hydroxide or potassium hydroxide at room temperature to form aluminates—protective passivation under these conditions is negligible. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride, which is why household plumbing is never made from aluminium. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals.
Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table.
Inorganic compounds
The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless.
In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate Ka of 10−5. Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed.
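With the approximate Ka of 10−5 given above, the acidity of such solutions can be estimated by treating the hexaaqua cation as a monoprotic weak acid. A sketch in Python; the 0.1 M concentration is an assumed example, not a figure from the text:

```python
import math

KA = 1e-5   # approximate Ka of [Al(H2O)6]3+ (from the text)

def ph_of_al3plus(conc_molar: float) -> float:
    """pH from the first hydrolysis step of [Al(H2O)6]3+.

    Solves Ka = x**2 / (c - x) exactly for x = [H+] via the
    quadratic formula, then converts to pH.
    """
    x = (-KA + math.sqrt(KA**2 + 4 * KA * conc_molar)) / 2
    return -math.log10(x)

# A 0.1 M solution of an aluminium salt comes out around pH 3,
# distinctly acidic, as the text states.
print(f"pH ≈ {ph_of_al3plus(0.1):.2f}")
```

Only the first hydrolysis step is modeled here; the further steps leading to Al(OH)3 precipitation are not captured by this simple treatment.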
Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds instead:
2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O (on heating)
All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation. Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 has a high melting point and is made by reaction of aluminium oxide with hydrogen fluoride gas at elevated temperature.
With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium metal with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction).
Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point and very low volatility, is chemically inert, and is a good electrical insulator. It is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium metal. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement).
The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at high temperature and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide. As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination.
Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements.
Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals) and over 150 intermetallics with other metals are known. Preparation involves heating fixed metals together in certain proportion, followed by gradual cooling and annealing. Bonding in them is predominantly metallic and the crystal structure primarily depends on efficiency of packing.
There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra. More thoroughly investigated are compounds of the formula R4Al2 which contain an Al–Al bond and where R is a large organic ligand.
Organoaluminium compounds and related hydrides
A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. Pri, Bui, Me3CCH2); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds.
The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride that is a dimer with the formula (BH3)2.
Natural occurrence
Space
Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen. The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the Universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which, upon capturing free protons and neutrons, becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with an abundance of 0.005% relative to 27Al but its half-life of 717,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter.
Earth
Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in greater proportion in the Earth's crust than in the Universe at large, because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 2 μg/kg.
Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates. Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−.
Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India.
History
The history of aluminium has been shaped by usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century.
The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash.
Attempts to produce aluminium metal date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium.
As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Etienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity, and the aluminium produced differed in properties from sample to sample. Because of its electricity-conducting capacity, aluminium was used as the cap of the Washington Monument, completed in 1885 and then the tallest structure in the world; the non-corroding metal cap was intended to serve as a lightning rod peak.
The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of the aluminium metal is based on the Bayer and Hall–Héroult processes.
Prices of aluminium dropped and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light strong airframes; during World War II, demand by major governments for aviation was even higher.
By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and increasingly being used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and employed as a storage for drinks in 1958.
Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013.
The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs were lowered through technological progress and economies of scale. However, the need to exploit lower-grade deposits and fast-increasing input costs (above all, energy) increased the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up the cost of electricity.
Etymology
The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, a naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer".
Origins
British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. The name appeared to be derived from the English word alum and the Latin suffix -ium; but it was customary then to give elements names originating in Latin, so the name was not adopted universally. It was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English name alum does not come directly from Latin, whereas alumine/alumina obviously comes from the Latin word alumen (upon declension, alumen changes to alumin-).
One example was Essai sur la Nomenclature chimique (July 1811), written in French by a Swedish chemist, Jöns Jacob Berzelius, in which the name aluminium is given to the element that would be synthesized from alum. (Another article in the same journal issue also gives the name aluminium to the metal whose oxide is the basis of sapphire.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The next year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since. Their usage is regional: aluminum dominates in the United States and Canada; aluminium, in the rest of the English-speaking world.
Spelling
In 1812, a British scientist, Thomas Young, wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he thought had a "less classical sound". This name did catch on: although the -um spelling was occasionally used in Britain, the American scientific language used -ium from the start. Most scientists throughout the world used -ium in the 19th century, and it became entrenched in several other European languages, such as French, German, and Dutch. In 1828, an American lexicographer, Noah Webster, entered only the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the -um spelling gained usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the -um spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the -ium spelling in all the patents he filed between 1886 and 1903. It is unknown whether this spelling was introduced by mistake or intentionally, but Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the United States, the -ium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; in the next decade, the -um spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling.
The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, it recognized aluminum as an acceptable variant; the most recent, 2005 edition of the IUPAC nomenclature of inorganic chemistry also acknowledges this spelling. IUPAC official publications use the -ium spelling as primary, and they list both spellings where appropriate.
Production and refinement
The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium metal.
Aluminium production is highly energy-consuming, and so the producers tend to locate smelters in places where electric power is both plentiful and inexpensive. Production of one kilogram of aluminium requires 7 kilograms of oil energy equivalent, as compared to 1.5 kilograms for steel and 2 kilograms for plastic. As of 2019, the world's largest smelters of aluminium are located in China, India, Russia, Canada, and the United Arab Emirates, while China is by far the top producer of aluminium with a world share of fifty-five percent.
According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is 80 kg. Much of this is in more-developed countries (350–500 kg per capita) rather than less-developed countries (35 kg per capita).
Bayer process
Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds:
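For gibbsite-rich bauxite, the digestion reaction can be written as follows (a standard textbook formulation, supplied here for illustration):

```latex
% Digestion: aluminium hydroxide dissolves in hot caustic soda as the aluminate ion
\mathrm{Al(OH)_3 + Na^+ + OH^- \longrightarrow Na^+ + [Al(OH)_4]^-}
```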
After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as the pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide, which causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of the aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, purified (if needed), and recycled.
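The seeding and heating steps described above correspond to the standard precipitation and calcination equations (conventional textbook forms, added for illustration):

```latex
% Precipitation: seeded aluminate ions decompose back to aluminium hydroxide
\mathrm{[Al(OH)_4]^- \longrightarrow Al(OH)_3 + OH^-}

% Calcination: heating aluminium hydroxide drives off water, leaving alumina
\mathrm{2\,Al(OH)_3 \xrightarrow{\;\Delta\;} Al_2O_3 + 3\,H_2O}
```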
Hall–Héroult process
The conversion of alumina to aluminium metal is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium metal sinks to the bottom of the solution and is tapped off, usually to be cast into large blocks called aluminium billets for further processing.
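With carbon anodes, which are consumed in the process as described below, the overall cell reaction is conventionally written as (a standard formulation):

```latex
% Overall Hall–Héroult reaction: alumina is reduced at the expense of the carbon anode
\mathrm{2\,Al_2O_3 + 3\,C \longrightarrow 4\,Al + 3\,CO_2}
```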
Anodes of the electrolysis cell are made of carbon—the most resistant material against fluoride corrosion—and are either baked in the process itself or prebaked. The former, also called Söderberg anodes, are less power-efficient, and the fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither the aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity is not required for them because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years, following a failure of the cathode.
The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process, which involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%.
Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible.
Recycling
Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%.
White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, the waste is used as a filler in asphalt and concrete.
Applications
Metal
The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons).
Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents, for both wrought and cast aluminium, are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin), with the levels of these other metals at a few percent by weight. For example, the Kynal family of alloys was developed by the British chemical manufacturer Imperial Chemical Industries.
The major uses for aluminium metal are in:
Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density;
Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof;
Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important;
Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion;
A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage;
Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength.
Compounds
The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent.
Several sulfates of aluminium have industrial and commercial applications. Aluminium sulfate (in its hydrate form) is produced on an annual scale of several million metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, in deodorizing mineral oils, in leather tanning, and in the production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following the availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in the chemical and petrochemical industries, the dyeing industry, and in the synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of the solidification of cement.
Many aluminium compounds have niche applications, for example:
Aluminium acetate in solution is used as an astringent.
Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and in dental cement.
Aluminium hydroxide is used as an antacid, and mordant; it is used also in water purification, the manufacture of glass and ceramics, and in the waterproofing of fabrics.
Lithium aluminium hydride is a powerful reducing agent used in organic chemistry.
Organoaluminiums are used as Lewis acids and co-catalysts.
Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene.
Aqueous aluminium ions (such as aqueous aluminium sulfate) are used to treat against fish parasites such as Gyrodactylus salaris.
In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Until 2004, most adjuvants used in vaccines were aluminium salts.
Biology
Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which corresponds to 435 grams (about one pound) for a person.
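The quoted figure follows from scaling the LD50 by body mass; assuming a reference adult mass of about 70 kg:

```latex
6207\ \mathrm{\tfrac{mg}{kg}} \times 70\ \mathrm{kg} \approx 4.3 \times 10^{5}\ \mathrm{mg} \approx 434\ \mathrm{g}
```

which is consistent with the roughly one-pound figure given above.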
Toxicity
Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed leaves the body in feces; most of the small part that enters the bloodstream is excreted via urine; nevertheless, some aluminium does pass the blood–brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, in Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus.
Effects
Although it happens rarely, aluminium can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for control of excess gastric acidity) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia.
During the 1988 Camelford water pollution incident people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems.
Aluminium has been suspected of being a possible cause of Alzheimer's disease, but over 40 years of research into this has found no good evidence of a causal effect.
Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium.
Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard.
Exposure routes
Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more than aluminium from water. Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients.
Consumption of antacids, antiperspirants, vaccines, and cosmetics provide possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues.
Treatment
In case of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation. However, it should be applied with caution, as it reduces not only aluminium levels in the body but also those of other metals such as copper or iron.
Environmental effects
High levels of aluminium occur near mining sites; small amounts of aluminium are released into the environment by coal-fired power plants and incinerators. Aluminium in the air is washed out by rain or normally settles out, but small particles of aluminium remain in the air for a long time.
Acidic precipitation is the main natural factor mobilizing aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main source of aluminium in salt water and fresh water is the industrial processes that also release aluminium into the air.
In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic: aluminium may precipitate on the gills, causing loss of plasma and hemolymph ions and leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice.
Aluminium is primary among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism.
Aluminium production poses its own challenges to the environment at each step of the production process. The major challenge is greenhouse gas emissions. These gases result from the electrical consumption of the smelters and the byproducts of processing. The most potent of these gases are perfluorocarbons from the smelting process. Released sulfur dioxide is one of the primary precursors of acid rain.
Biodegradation of metallic aluminium is extremely rare; most aluminium-corroding organisms do not directly attack or consume the aluminium, but instead produce corrosive wastes. The fungus Geotrichum candidum can consume the aluminium in compact discs. The bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium.
See also
Aluminium granules
Aluminium joining
Aluminium–air battery
Aluminized steel, for corrosion resistance and other properties
Aluminized screen, for display devices
Aluminized cloth, to reflect heat
Aluminized mylar, to reflect heat
Panel edge staining
Quantum clock
Notes
References
Bibliography
Further reading
Mimi Sheller, Aluminum Dreams: The Making of Light Modernity. Cambridge, Mass.: Massachusetts Institute of Technology Press, 2014.
External links
Aluminium at The Periodic Table of Videos (University of Nottingham)
Toxic Substances Portal – Aluminum – from the Agency for Toxic Substances and Disease Registry, United States Department of Health and Human Services
CDC – NIOSH Pocket Guide to Chemical Hazards – Aluminum
World production of primary aluminium, by country
Price history of aluminum, according to the IMF
History of Aluminium – from the website of the International Aluminium Institute
Emedicine – Aluminium
Chemical elements
Post-transition metals
Aluminium
Electrical conductors
Pyrotechnic fuels
Airship technology
Reducing agents
E-number additives
Native element minerals
Chemical elements with face-centered cubic structure
https://en.wikipedia.org/wiki/Anarcho-capitalism
Anarcho-capitalism (colloquially: ancap or an-cap) is an anti-statist, libertarian political philosophy and economic theory that seeks to abolish centralized states in favor of stateless societies with systems of private property enforced by private agencies, the non-aggression principle, free markets, and self-ownership, which extends the concept to include control of private property as part of the self. In the absence of statute, anarcho-capitalists hold that society tends to contractually self-regulate and civilize through participation in the free market, which they describe as a voluntary society involving the voluntary exchange of goods and services. In a theoretical anarcho-capitalist society, the system of private property would still exist and be enforced by private defense agencies and/or insurance companies selected by customers, which would operate competitively in a market and fulfill the roles of courts and the police.
According to its proponents, various historical theorists have espoused philosophies similar to anarcho-capitalism. While the earliest extant attestation of "anarchocapitalism [sic]" is in Karl Hess's essay "The Death of Politics" published by Playboy in March 1969, the person credited with coining the terms anarcho-capitalism and anarcho-capitalist is Murray Rothbard. Rothbard, a leading figure in the 20th-century American libertarian movement, synthesized elements from the Austrian School, classical liberalism and 19th-century American individualist anarchists and mutualists Lysander Spooner and Benjamin Tucker, while rejecting the labor theory of value. Rothbard's anarcho-capitalist society would operate under a mutually agreed-upon "legal code which would be generally accepted, and which the courts would pledge themselves to follow". This legal code would recognize contracts between individuals, private property, self-ownership and tort law in keeping with the non-aggression principle. Rothbard views the power of the state as unjustified, arguing that it restricts individual rights and prosperity, and creates social and economic problems.
Anarcho-capitalists and right-libertarians cite several historical precedents of what they believe to be examples of quasi-anarcho-capitalism, including the Republic of Cospaia, Acadia, Anglo-Saxon England, Medieval Iceland, the American Old West, Gaelic Ireland, and merchant law, admiralty law, and early common law.
Anarcho-capitalism is distinguished from minarchism, which advocates a night-watchman state limited to protecting individuals from aggression and enforcing private property. Unlike most anarchists, anarcho-capitalists support private property and private institutions.
Classification
Anarcho-capitalism developed from radical American anti-state libertarianism and individualist anarchism. A strong current within anarchism does not consider anarcho-capitalism to be part of the anarchist movement, arguing that anarchism has historically been an anti-capitalist movement and that, by definition, anarchism is incompatible with capitalist forms. According to several scholars, anarcho-capitalism lies outside the tradition of the vast majority of anarchist schools of thought and is more closely affiliated with capitalism, right-libertarianism, and neoliberalism. Social anarchists oppose and reject capitalism and consider "anarcho-capitalism" to be a contradiction in terms, although some, including anarcho-capitalists and right-libertarians, consider anarcho-capitalism to be a form of anarchism.
Anarcho-capitalism is occasionally seen as part of the New Right.
Philosophy
Author J. Michael Oliver says that during the 1960s, a philosophical movement arose in the United States that championed "reason, ethical egoism, and free-market capitalism". According to Oliver, anarcho-capitalism is a political theory that logically follows from the philosophical conclusions of Objectivism, a philosophical system developed by Russian-American writer Ayn Rand, but Oliver acknowledges that his advocacy of anarcho-capitalism is "quite at odds with Rand's ardent defense of 'limited government'". Professor Lisa Duggan also says that Rand's anti-statist, pro–"free market" stances went on to shape the politics of anarcho-capitalism.
According to Patrik Schumacher, the political ideology and programme of anarcho-capitalism envisages a radicalization of the neoliberal "rollback of the state", calling for the extension of "entrepreneurial freedom" and "competitive market rationality" to the point where the scope for private enterprise is all-encompassing and "leaves no space for state action whatsoever".
On the state
Anarcho-capitalists oppose the state and seek to privatize any useful service the government presently provides, such as education, infrastructure, or the enforcement of law. They see capitalism and the "free market" as the basis for a free and prosperous society. Murray Rothbard stated that the difference between free-market capitalism and state capitalism is the difference between "peaceful, voluntary exchange" and a "collusive partnership" between business and government that "uses coercion to subvert the free market". Rothbard argued that all government services, including defense, are inefficient because they lack a market-based pricing mechanism regulated by "the voluntary decisions of consumers purchasing services that fulfill their highest-priority needs" and by investors seeking the most profitable enterprises to invest in.
Maverick Edwards of Liberty University describes anarcho-capitalism as a political, social, and economic theory that places markets as the central "governing body" and in which government no longer "grants" rights to its citizenry.
Non-aggression principle
Writer Stanisław Wójtowicz says that although anarcho-capitalists are against centralized states, they hold that all people would naturally share and agree to a specific moral theory based on the non-aggression principle. While the Friedmanian formulation of anarcho-capitalism is robust to the presence of violence, and in fact assumes some degree of violence will occur, anarcho-capitalism as formulated by Rothbard and others holds strongly to the central libertarian non-aggression axiom, sometimes called the non-aggression principle.
Rothbard's defense of the self-ownership principle stems from what he believed to be his falsification of all other alternatives, namely that either a group of people can own another group of people, or that no single person has full ownership over one's self. Rothbard dismisses these two cases on the basis that they cannot result in a universal ethic, i.e. a just natural law that can govern all people, independent of place and time. The only alternative that remains to Rothbard is self-ownership which he believes is both axiomatic and universal.
In general, the non-aggression axiom is described by Rothbard as a prohibition against the initiation of force, or the threat of force, against persons (in which he includes direct violence, assault, and murder) or property (in which he includes fraud, burglary, theft, and taxation). The initiation of force is usually referred to as aggression or coercion. The difference between anarcho-capitalists and other libertarians is largely one of the degree to which they take this axiom. Minarchist libertarians, such as libertarian political parties, would retain the state in some smaller and less invasive form, keeping at the very least public police, courts, and military, while others might allow further government programs. In contrast, Rothbard rejects any level of state intervention, defining the state as a coercive monopoly and as the only entity in human society, excluding acknowledged criminals, that derives its income entirely from coercion in the form of taxation, which Rothbard describes as "compulsory seizure of the property of the State's inhabitants, or subjects".
Some anarcho-capitalists such as Rothbard accept the non-aggression axiom on an intrinsic moral or natural law basis. It is in terms of the non-aggression principle that Rothbard defined his interpretation of anarchism, "a system which provides no legal sanction for such aggression ['against person and property']"; and wrote that "what anarchism proposes to do, then, is to abolish the State, i.e. to abolish the regularized institution of aggressive coercion". In an interview published in the American libertarian journal The New Banner, Rothbard stated that "capitalism is the fullest expression of anarchism, and anarchism is the fullest expression of capitalism".
Property
Private property
Anarcho-capitalists postulate the privatization of everything, including cities with all their infrastructures, public spaces, streets and urban management systems.
Central to Rothbardian anarcho-capitalism are the concepts of self-ownership and original appropriation, which combine personal and private property; these concepts were further elaborated by Hans-Hermann Hoppe.
Rothbard, however, rejected the Lockean proviso and followed the rule of "first come, first served", without any consideration of how much of a resource is left for other individuals.
Anarcho-capitalists advocate private ownership of the means of production and the allocation of the product of labor created by workers within the context of wage labour and the free market – that is, through decisions made by property and capital owners, regardless of what an individual needs or does not need. Original appropriation allows an individual to claim any never-before-used resources, including land, and by improving or otherwise using them, to own them with the same "absolute right" as their own body, retaining those rights forever regardless of whether the resource is still being used. According to Rothbard, property can only come about through labor; therefore original appropriation of land is not legitimated by merely claiming it or building a fence around it—it is only by using land and by mixing one's labor with it that original appropriation is legitimized: "Any attempt to claim a new resource that someone does not use would have to be considered invasive of the property right of whoever the first user will turn out to be". Rothbard argued that the resource need not continue to be used in order for it to be the person's property, as "for once his labor is mixed with the natural resource, it remains his owned land. His labor has been irretrievably mixed with the land, and the land is therefore his or his assigns' in perpetuity".
Rothbard also spoke about a theory of justice in property rights:
In Justice and Property Rights, Rothbard wrote that "any identifiable owner (the original victim of theft or his heir) must be accorded his property". In the case of slavery, Rothbard claimed that in many cases "the old plantations and the heirs and descendants of the former slaves can be identified, and the reparations can become highly specific indeed". Rothbard believed slaves rightfully own any land they were forced to work on under the homestead principle. If property is held by the state, Rothbard advocated its confiscation and "return to the private sector", writing that "any property in the hands of the State is in the hands of thieves, and should be liberated as quickly as possible". Rothbard proposed that state universities be seized by the students and faculty under the homestead principle. Rothbard also supported the expropriation of nominally "private property" if it is the result of state-initiated force, such as businesses that receive grants and subsidies. Rothbard further proposed that businesses that receive at least 50% of their funding from the state be confiscated by the workers, writing: "What we libertarians object to, then, is not government per se but crime, what we object to is unjust or criminal property titles; what we are for is not 'private' property per se but just, innocent, non-criminal private property".
Similarly, Karl Hess wrote that "libertarianism wants to advance principles of property but that it in no way wishes to defend, willy nilly, all property which now is called private ... Much of that property is stolen. Much is of dubious title. All of it is deeply intertwined with an immoral, coercive state system".
By accepting an axiomatic definition of private property and property rights, anarcho-capitalists deny the legitimacy of a state on principle, as Hans-Hermann Hoppe argues.
Anarchists view capitalism as an inherently authoritarian and hierarchical system and seek the abolition of private property. There is disagreement between anarchists and anarcho-capitalists: the former generally reject anarcho-capitalism as a form of anarchism and consider it a contradiction in terms, while the latter hold that the abolition of private property would require expropriation, which is "counterproductive to order" and would require a state.
Common property
As opposed to anarchists, most anarcho-capitalists reject the commons. However, some of them propose that non-state public or community property can also exist in an anarcho-capitalist society. For anarcho-capitalists, what is important is that property is "acquired" and transferred without help or hindrance from what they call the "compulsory state". Deontological anarcho-capitalists believe that the only just and most economically beneficial way to acquire property is through voluntary trade, gift, or labor-based original appropriation, rather than through aggression or fraud.
Anarcho-capitalists state that there could be cases where common property may develop in a Lockean natural rights framework. Anarcho-capitalists cite the example of a number of private businesses arising in an area, each owning the land and buildings it uses; they argue that the paths between them become cleared and trodden incrementally through customer and commercial movement. These thoroughfares may become valuable to the community, but according to them ownership cannot be attributed to any single person and original appropriation does not apply because many contributed the labor necessary to create them. In order to prevent such a thoroughfare from falling prey to the "tragedy of the commons", anarcho-capitalists suggest transitioning from common to private property, wherein an individual would make a homesteading claim based on disuse, acquire title by the assent of community consensus, form a corporation with other involved parties, or other means.
Randall G. Holcombe sees challenges stemming from the idea of common property under anarcho-capitalism, such as whether an individual might claim fishing rights in the area of a major shipping lane and thereby forbid passage through it. In contrast, Hoppe's work on anarcho-capitalist theory is based on the assumption that all property is privately held, "including all streets, rivers, airports, and harbors", which forms the foundation of his views on immigration.
Intellectual property
Some anarcho-capitalists strongly oppose intellectual property (i.e., trademarks, patents, copyrights). Stephan N. Kinsella argues that ownership only relates to tangible assets.
Contractual society
The society envisioned by anarcho-capitalists has been labelled by them as a "contractual society", which Rothbard described as "a society based purely on voluntary action, entirely unhampered by violence or threats of violence". The system relies on contracts between individuals as the legal framework, which would be enforced by private police and security forces as well as private arbitration.
Rothbard argues that limited liability for corporations could also exist through contract, arguing that "[c]orporations are not at all monopolistic privileges; they are free associations of individuals pooling their capital. On the purely free market, those men would simply announce to their creditors that their liability is limited to the capital specifically invested in the corporation".
There are limits to the right to contract under some interpretations of anarcho-capitalism. Rothbard believed that the right to contract is based in inalienable rights and that, because of this, any contract that implicitly violates those rights can be voided at will, preventing a person from permanently selling himself or herself into unindentured slavery. That restriction aside, the right to contract under an anarcho-capitalist order would be quite broad. For example, Rothbard went as far as to justify stork markets, arguing that a market in guardianship rights would facilitate the transfer of guardianship from abusive or neglectful parents to those more interested in or suited to raising children. Other anarcho-capitalists have also suggested the legalization of organ markets, as in Iran's renal market. Other interpretations conclude that banning such contracts would in itself be an unacceptably invasive interference in the right to contract.
Included in the right of contract is "the right to contract oneself out for employment by others". While anarchists criticize wage labour describing it as wage slavery, anarcho-capitalists view it as a consensual contract. Some anarcho-capitalists prefer to see self-employment prevail over wage labor. David D. Friedman has expressed a preference for a society where "almost everyone is self-employed" and "instead of corporations there are large groups of entrepreneurs related by trade, not authority. Each sells not his time, but what his time produces".
Law and order and the use of violence
Different anarcho-capitalists propose different forms of anarcho-capitalism, and one area of disagreement is the area of law. In The Market for Liberty, Morris and Linda Tannehill object to any statutory law whatsoever. They argue that all one has to do is ask whether one is aggressing against another in order to decide if an act is right or wrong. However, while also supporting a ban on force and fraud, Rothbard supports the establishment of a mutually agreed-upon centralized libertarian legal code which private courts would pledge to follow, as he presumes a high degree of convergence amongst individuals about what constitutes natural justice.
Unlike both the Tannehills and Rothbard who see an ideological commonality of ethics and morality as a requirement, David D. Friedman proposes that "the systems of law will be produced for profit on the open market, just as books and bras are produced today. There could be competition among different brands of law, just as there is competition among different brands of cars". Friedman says whether this would lead to a libertarian society "remains to be proven". He says it is a possibility that very un-libertarian laws may result, such as laws against drugs, but he thinks this would be rare. He reasons that "if the value of a law to its supporters is less than its cost to its victims, that law ... will not survive in an anarcho-capitalist society".
Anarcho-capitalists only accept the collective defense of individual liberty (i.e. courts, military, or police forces) insofar as such groups are formed and paid for on an explicitly voluntary basis. However, their complaint is not just that the state's defensive services are funded by taxation, but that the state assumes it is the only legitimate practitioner of physical force—that is, they believe it forcibly prevents the private sector from providing comprehensive security, such as a police, judicial and prison systems to protect individuals from aggressors. Anarcho-capitalists believe that there is nothing morally superior about the state which would grant it, but not private individuals, a right to use physical force to restrain aggressors. If competition in security provision were allowed to exist, prices would also be lower and services would be better according to anarcho-capitalists. According to Molinari: "Under a regime of liberty, the natural organization of the security industry would not be different from that of other industries". Proponents believe that private systems of justice and defense already exist, naturally forming where the market is allowed to "compensate for the failure of the state", namely private arbitration, security guards, neighborhood watch groups and so on. These private courts and police are sometimes referred to generically as private defense agencies (PDAs). The defense of those unable to pay for such protection might be financed by charitable organizations relying on voluntary donation rather than by state institutions relying on taxation, or by cooperative self-help by groups of individuals. Edward Stringham argues that private adjudication of disputes could enable the market to internalize externalities and provide services that customers desire.
Rothbard stated that the American Revolutionary War and the War of Southern Secession were the only two just wars in American military history. Some anarcho-capitalists such as Rothbard feel that violent revolution is counter-productive and prefer voluntary forms of economic secession to the extent possible. Retributive justice is often a component of the contracts imagined for an anarcho-capitalist society. According to Matthew O'Keeffe, some anarcho-capitalists believe prisons or indentured servitude would be justifiable institutions to deal with those who violate anarcho-capitalist property relations, while others believe exile or forced restitution are sufficient. Rothbard stressed the importance of restitution as the primary focus of a libertarian legal order and advocated corporal punishment for petty vandals and the death penalty for murderers.
Bruce L. Benson argues that legal codes may impose punitive damages for intentional torts in the interest of deterring crime. Benson gives the example of a thief who breaks into a house by picking a lock. Even if caught before taking anything, Benson argues that the thief would still owe the victim for violating the sanctity of his property rights. Benson opines that despite the lack of objectively measurable losses in such cases, "standardized rules that are generally perceived to be fair by members of the community would, in all likelihood, be established through precedent, allowing judgments to specify payments that are reasonably appropriate for most criminal offenses".
Morris and Linda Tannehill raise a similar example, saying that a bank robber who had an attack of conscience and returned the money would still owe reparations for endangering the employees' and customers' lives and safety, in addition to the costs of the defense agency answering the teller's call for help. However, they believe that the robber's loss of reputation would be even more damaging. They suggest that specialized companies would list aggressors so that anyone wishing to do business with a man could first check his record, provided they trust the veracity of the companies' records. They further theorise that the bank robber would find insurance companies listing him as a very poor risk and other firms would be reluctant to enter into contracts with him.
Influences
Murray Rothbard listed several ideologies that, as he interpreted them, influenced anarcho-capitalism: anarchism, and more precisely individualist anarchism; classical liberalism; and the Austrian School of economic thought. Scholars additionally associate anarcho-capitalism with neo-classical liberalism, radical neoliberalism and right-libertarianism.
Anarchism
In both its social and individualist forms, anarchism is usually considered an anti-capitalist and radical left-wing or far-left movement that promotes libertarian socialist economic theories such as collectivism, communism, individualism, mutualism and syndicalism. Because anarchism is usually described alongside libertarian Marxism as the libertarian wing of the socialist movement and as having a historical association with anti-capitalism and socialism, anarchists believe that capitalism is incompatible with social and economic equality and therefore do not recognize anarcho-capitalism as an anarchist school of thought. In particular, anarchists argue that capitalist transactions are not voluntary and that maintaining the class structure of a capitalist society requires coercion which is incompatible with an anarchist society. The usage of libertarian is also in dispute. While both anarchists and anarcho-capitalists have used it, libertarian was synonymous with anarchist until the mid-20th century, when anarcho-capitalist theory developed.
Anarcho-capitalists are distinguished from the dominant anarchist tradition by their relation to property and capital. While both anarchism and anarcho-capitalism share general antipathy towards government authority, anarcho-capitalism favors free-market capitalism. Anarchists, including egoists such as Max Stirner, have supported the protection of an individual's freedom from powers of both government and private property owners. In contrast, while condemning governmental encroachment on personal liberties, anarcho-capitalists support freedoms based on private property rights. Anarcho-capitalist theorist Murray Rothbard argued that protesters should rent a street for protest from its owners. The abolition of public amenities is a common theme in some anarcho-capitalist writings.
As anarcho-capitalism puts laissez-faire economics before economic equality, it is commonly viewed as incompatible with the anti-capitalist and egalitarian tradition of anarchism. Although anarcho-capitalist theory implies the abolition of the state in favour of a fully laissez-faire economy, it lies outside the tradition of anarchism. While using the language of anarchism, anarcho-capitalism only shares anarchism's antipathy towards the state and not anarchism's antipathy towards hierarchy as theorists expect from anarcho-capitalist economic power relations. It follows a different paradigm from anarchism and has a fundamentally different approach and goals. In spite of the anarcho- in its title, anarcho-capitalism is more closely affiliated with capitalism, right-libertarianism, and liberalism than with anarchism. Some within this laissez-faire tradition reject the designation of anarcho-capitalism, believing that capitalism may either refer to the laissez-faire market they support or the government-regulated system that they oppose.
Rothbard argued that anarcho-capitalism is the only true form of anarchism—the only form of anarchism that could possibly exist in reality as he maintained that any other form presupposes authoritarian enforcement of a political ideology such as "redistribution of private property", which he attributed to anarchism. According to this argument, the capitalist free market is "the natural situation" that would result from people being free from state authority and entails the establishment of all voluntary associations in society such as cooperatives, non-profit organizations, businesses and so on. Moreover, anarcho-capitalists, as well as classical liberal minarchists, argue that the application of anarchist ideals as advocated by what they term "left-wing anarchists" would require an authoritarian body of some sort to impose it. Based on their understanding and interpretation of anarchism, in order to forcefully prevent people from accumulating capital, which they believe is a goal of anarchists, there would necessarily be a redistributive organization of some sort which would have the authority to in essence exact a tax and re-allocate the resulting resources to a larger group of people. They conclude that this theoretical body would inherently have political power and would be nothing short of a state. The difference between such an arrangement and an anarcho-capitalist system is what anarcho-capitalists see as the voluntary nature of organization within anarcho-capitalism contrasted with a "centralized ideology" and a "paired enforcement mechanism" which they believe would be necessary under what they describe as a "coercively" egalitarian-anarchist system.
Rothbard also argued that the capitalist system of today is not properly anarchistic because it often colludes with the state. According to Rothbard, "what Marx and later writers have done is to lump together two extremely different and even contradictory concepts and actions under the same portmanteau term. These two contradictory concepts are what I would call 'free-market capitalism' on the one hand, and 'state capitalism' on the other". "The difference between free-market capitalism and state capitalism", writes Rothbard, "is precisely the difference between, on the one hand, peaceful, voluntary exchange, and on the other, violent expropriation". He continues: "State capitalism inevitably creates all sorts of problems which become insoluble".
Traditional anarchists reject the notion of capitalism, hierarchies and private property. Albert Meltzer argued that anarcho-capitalism simply cannot be anarchism because capitalism and the state are inextricably interlinked and because capitalism exhibits domineering hierarchical structures such as that between an employer and an employee. Anna Morgenstern approaches this topic from the opposite perspective, arguing that anarcho-capitalists are not really capitalists because "mass concentration of capital is impossible" without the state. According to Jeremy Jennings, "[i]t is hard not to conclude that these ideas," referring to anarcho-capitalism, have "roots deep in classical liberalism" and "are described as anarchist only on the basis of a misunderstanding of what anarchism is." For Jennings, "anarchism does not stand for the untrammelled freedom of the individual (as the 'anarcho-capitalists' appear to believe) but, as we have already seen, for the extension of individuality and community." Similarly, Barbara Goodwin, Emeritus Professor of Politics at the University of East Anglia, Norwich, argues that anarcho-capitalism's "true place is in the group of right-wing libertarians", not in anarchism.
Some right-libertarian scholars like Michael Huemer, who identify with the ideology, describe anarcho-capitalism as a "variety of anarchism". British author Andrew Heywood also believes that "individualist anarchism overlaps with libertarianism and is usually linked to a strong belief in the market as a self-regulating mechanism, most obviously manifest in the form of anarcho-capitalism". Frank H. Brooks, author of The Individualist Anarchists: An Anthology of Liberty (1881–1908), believes that "anarchism has always included a significant strain of radical individualism, from the hyperrationalism of Godwin, to the egoism of Stirner, to the libertarians and anarcho-capitalists of today".
While both anarchism and anarcho-capitalism oppose the state, opposition to the state is a necessary but not sufficient condition, and anarchists and anarcho-capitalists interpret the rejection of the state differently. Austrian School economist David Prychitko, in the context of anarcho-capitalism, says that "while society without a state is necessary for full-fledged anarchy, it is nevertheless insufficient". According to Ruth Kinna, anarcho-capitalists are anti-statists who draw more on right-wing liberal theory and the Austrian School than anarchist traditions. Kinna writes that "[i]n order to highlight the clear distinction between the two positions", anarchists describe anarcho-capitalists as "propertarians". Anarcho-capitalism is usually seen as part of the New Right.
Some anarcho-capitalists acknowledge that anarchists consider the word "anarchy" to be the antithesis of hierarchy, and that "anarcho-capitalism" is therefore sometimes considered philosophically distinct from what those anarchists consider true anarchism, as an anarcho-capitalist society would inherently contain hierarchy. Additionally, Rothbard distinguished between "government" and "governance"; proponents of anarcho-capitalism thus think the philosophy's common name is indeed consistent, as it promotes private governance but is vehemently anti-government.
Classical liberalism
Historian and libertarian Ralph Raico argued that what liberal philosophers "had come up with was a form of individualist anarchism, or, as it would be called today, anarcho-capitalism or market anarchism". He also said that Gustave de Molinari was proposing a doctrine of the private production of security, a position which was later taken up by Murray Rothbard. Some anarcho-capitalists consider Molinari to be the first proponent of anarcho-capitalism. In the preface to the 1977 English translation, Murray Rothbard called The Production of Security the "first presentation anywhere in human history of what is now called anarcho-capitalism", although he admitted that "Molinari did not use the terminology, and probably would have balked at the name". Hans-Hermann Hoppe said that "the 1849 article 'The Production of Security' is probably the single most important contribution to the modern theory of anarcho-capitalism". According to Hoppe, among the 19th-century precursors of anarcho-capitalism were the philosopher Herbert Spencer, the classical liberal Auberon Herbert and the liberal socialist Franz Oppenheimer.
Ruth Kinna credits Murray Rothbard with coining the term anarcho-capitalism, which, Kinna proposes, describes "a commitment to unregulated private property and laissez-faire economics, prioritizing the liberty-rights of individuals, unfettered by government regulation, to accumulate, consume and determine the patterns of their lives as they see fit". According to Kinna, anarcho-capitalists "will sometimes label themselves market anarchists because they recognize the negative connotations of 'capitalism'. But the literature of anarcho-capitalism draws on classical liberal theory, particularly the Austrian School – Friedrich von Hayek and Ludwig von Mises – rather than recognizable anarchist traditions. Ayn Rand's laissez-faire, anti-government, corporate philosophy – Objectivism – is sometimes associated with anarcho-capitalism". Other scholars similarly associate anarcho-capitalism with anti-state classical liberalism, neo-classical liberalism, radical neoliberalism and right-libertarianism.
Paul Dragos Aligica writes that there is a "foundational difference between the classical liberal and the anarcho-capitalist positions". Classical liberalism, while accepting critical arguments against collectivism, acknowledges a certain level of public ownership and collective governance as necessary to provide practical solutions to political problems. In contrast, anarcho-capitalism, according to Aligica, denies any requirement for any form of public administration and allows no meaningful role for the public sphere, which is seen as sub-optimal and illegitimate.
Individualist anarchism
Murray Rothbard, a student of Ludwig von Mises, stated that he was influenced by the work of the 19th-century American individualist anarchists. In the winter of 1949, Rothbard decided to reject minimal state laissez-faire and embrace his interpretation of individualist anarchism. In 1965, Rothbard wrote that "Lysander Spooner and Benjamin R. Tucker were unsurpassed as political philosophers and nothing is more needed today than a revival and development of the largely forgotten legacy they left to political philosophy". However, Rothbard thought that they had a faulty understanding of economics as the 19th-century individualist anarchists had a labor theory of value as influenced by the classical economists, while Rothbard was a student of Austrian School economics which does not agree with the labor theory of value. Rothbard sought to meld 19th-century American individualist anarchists' advocacy of economic individualism and free markets with the principles of Austrian School economics, arguing that "[t]here is, in the body of thought known as 'Austrian economics', a scientific explanation of the workings of the free market (and of the consequences of government intervention in that market) which individualist anarchists could easily incorporate into their political and social Weltanschauung". Rothbard held that the economic consequences of the political system they advocate would not result in an economy with people being paid in proportion to labor amounts, nor would profit and interest disappear as they expected. Tucker thought that unregulated banking and money issuance would cause increases in the money supply so that interest rates would drop to zero or near to it. Peter Marshall states that "anarcho-capitalism overlooks the egalitarian implications of traditional individualist anarchists like Spooner and Tucker". 
Stephanie Silberstein states that "While Spooner was no free-market capitalist, nor an anarcho-capitalist, he was not as opposed to capitalism as most socialists were."
In "The Spooner-Tucker Doctrine: An Economist's View", Rothbard explained his disagreements. Rothbard disagreed with Tucker that it would cause the money supply to increase because he believed that the money supply in a free market would be self-regulating. If it were not, then Rothbard argued inflation would occur so it is not necessarily desirable to increase the money supply in the first place. Rothbard claimed that Tucker was wrong to think that interest would disappear regardless because he believed people, in general, do not wish to lend their money to others without compensation, so there is no reason why this would change just because banking was unregulated. Tucker held a labor theory of value and thought that in a free market people would be paid in proportion to how much labor they exerted and that exploitation or usury was taking place if they were not. As Tucker explained in State Socialism and Anarchism, his theory was that unregulated banking would cause more money to be available and that this would allow the proliferation of new businesses which would, in turn, raise demand for labor. This led Tucker to believe that the labor theory of value would be vindicated and equal amounts of labor would receive equal pay. As an Austrian School economist, Rothbard did not agree with the labor theory and believed that prices of goods and services are proportional to marginal utility rather than to labor amounts in the free market. As opposed to Tucker he did not think that there was anything exploitative about people receiving an income according to how much "buyers of their services value their labor" or what that labor produces.
Without the labor theory of value, some argue that 19th-century individualist anarchists approximate the modern movement of anarcho-capitalism, although this has been contested or rejected. As economic theory changed, the popularity of the labor theory of classical economics was superseded by the subjective theory of value of neoclassical economics, and Rothbard combined Mises' Austrian School of economics with the absolutist views of human rights and rejection of the state he had absorbed from studying the individualist American anarchists of the 19th century such as Tucker and Spooner. In the mid-1950s, Rothbard wrote an unpublished article named "Are Libertarians 'Anarchists'?" under the pseudonym "Aubrey Herbert", concerned with differentiating himself from communist and socialistic economic views of anarchists, including the individualist anarchists of the 19th century, concluding that "we are not anarchists and that those who call us anarchists are not on firm etymological ground and are being completely unhistorical. On the other hand, it is clear that we are not archists either: we do not believe in establishing a tyrannical central authority that will coerce the noninvasive as well as the invasive. Perhaps, then, we could call ourselves by a new name: nonarchist." Joe Peacott, an American individualist anarchist in the mutualist tradition, criticizes anarcho-capitalists for trying to hegemonize the individualist anarchism label and make it appear as if all individualist anarchists are in favor of capitalism. Peacott states that "individualists, both past and present, agree with the communist anarchists that present-day capitalism is based on economic coercion, not on voluntary contract. Rent and interest are the mainstays of modern capitalism and are protected and enforced by the state. Without these two unjust institutions, capitalism could not exist".
Anarchist activists and scholars do not consider anarcho-capitalism a part of the anarchist movement, arguing that anarchism has historically been an anti-capitalist movement and seeing it as incompatible with capitalist forms. Although some regard anarcho-capitalism as a form of individualist anarchism, many others disagree or contest the existence of an individualist–socialist divide. Acknowledging that anarchists mostly identified with socialism, Rothbard wrote that individualist anarchism is different from anarcho-capitalism and other capitalist theories because the individualist anarchists retained the labor theory of value and socialist doctrines. Similarly, many writers deny that anarcho-capitalism is a form of anarchism or that capitalism is compatible with anarchism.
The Palgrave Handbook of Anarchism writes that "[a]s Benjamin Franks rightly points out, individualisms that defend or reinforce hierarchical forms such as the economic-power relations of anarcho-capitalism are incompatible with practices of social anarchism based on developing immanent goods which contest such inequalities". Laurence Davis cautiously asks "[I]s anarcho-capitalism really a form of anarchism or instead a wholly different ideological paradigm whose adherents have attempted to expropriate the language of anarchism for their own anti-anarchist ends?" Davis cites Iain McKay, "whom Franks cites as an authority to support his contention that 'academic analysis has followed activist currents in rejecting the view that anarcho-capitalism has anything to do with social anarchism'", as arguing "quite emphatically on the very pages cited by Franks that anarcho-capitalism is by no means a type of anarchism". McKay writes that "[i]t is important to stress that anarchist opposition to the so-called capitalist 'anarchists' does not reflect some kind of debate within anarchism, as many of these types like to pretend, but a debate between anarchism and its old enemy capitalism. ... Equally, given that anarchists and 'anarcho'-capitalists have fundamentally different analyses and goals it is hardly 'sectarian' to point this out".
Davis writes that "Franks asserts without supporting evidence that most major forms of individualist anarchism have been largely anarcho-capitalist in content, and concludes from this premise that most forms of individualism are incompatible with anarchism". Davis argues that "the conclusion is unsustainable because the premise is false, depending as it does for any validity it might have on the further assumption that anarcho-capitalism is indeed a form of anarchism. If we reject this view, then we must also reject the individual anarchist versus the communal anarchist 'chasm' style of argument that follows from it". Davis maintains that "the ideological core of anarchism is the belief that society can and should be organised without hierarchy and domination. Historically, anarchists have struggled against a wide range of regimes of domination, from capitalism, the state system, patriarchy, heterosexism, and the domination of nature to colonialism, the war system, slavery, fascism, white supremacy, and certain forms of organised religion". According to Davis, "[w]hile these visions range from the predominantly individualistic to the predominantly communitarian, features common to virtually all include an emphasis on self-management and self-regulatory methods of organisation, voluntary association, decentralised society, based on the principle of free association, in which people will manage and govern themselves". Finally, Davis includes a footnote stating that "[i]ndividualist anarchism may plausibly be regarded as a form of both socialism and anarchism. Whether the individualist anarchists were consistent anarchists (and socialists) is another question entirely. ... McKay comments as follows: 'any individualist anarchism which supports wage labour is inconsistent anarchism. It can easily be made consistent anarchism by applying its own principles consistently. In contrast "anarcho"-capitalism rejects so many of the basic, underlying, principles of anarchism ... that it cannot be made consistent with the ideals of anarchism'".
Historical precedents
Several anarcho-capitalists and right-libertarians have discussed historical precedents of what they believe were examples of anarcho-capitalism.
Free cities of medieval Europe
Economist and libertarian scholar Bryan Caplan considers the free cities of medieval Europe as examples of "anarchist" or "nearly anarchistic" societies, further arguing:
Medieval Iceland
According to the libertarian theorist David D. Friedman, "[m]edieval Icelandic institutions have several peculiar and interesting characteristics; they might almost have been invented by a mad economist to test the lengths to which market systems could supplant government in its most fundamental functions". While not directly labeling it anarcho-capitalist, Friedman argues that the legal system of the Icelandic Commonwealth comes close to being a real-world anarcho-capitalist legal system. Although noting that there was a single legal system, Friedman argues that enforcement of the law was entirely private and highly capitalist, providing some evidence of how such a society would function. Friedman further wrote that "[e]ven where the Icelandic legal system recognized an essentially 'public' offense, it dealt with it by giving some individual (in some cases chosen by lot from those affected) the right to pursue the case and collect the resulting fine, thus fitting it into an essentially private system".
Friedman and Bruce L. Benson argued that the Icelandic Commonwealth saw significant economic and social progress in the absence of systems of criminal law, an executive, or a bureaucracy. The commonwealth was led by chieftains, whose positions could be bought and sold like private property, and membership in a chieftainship was completely voluntary.
American Old West
According to Terry L. Anderson and P. J. Hill, the Old West in the United States in the period of 1830 to 1900 was similar to anarcho-capitalism in that "private agencies provided the necessary basis for an orderly society in which property was protected and conflicts were resolved" and that the common popular perception that the Old West was chaotic with little respect for property rights is incorrect. Since squatters had no claim to western lands under federal law, extra-legal organizations formed to fill the void. Benson explains:
According to Anderson, "[d]efining anarcho-capitalist to mean minimal government with property rights developed from the bottom up, the western frontier was anarcho-capitalistic. People on the frontier invented institutions that fit the resource constraints they faced".
Gaelic Ireland
In his work For a New Liberty, Murray Rothbard claimed ancient Gaelic Ireland as an example of a nearly anarcho-capitalist society. In his depiction, citing the work of Professor Joseph Peden, the basic political unit of ancient Ireland was the tuath, portrayed as "a body of persons voluntarily united for socially beneficial purposes" whose territorial claim was limited to "the sum total of the landed properties of its members". Civil disputes were settled by private arbiters called "brehons", and the compensation owed to the wronged party was insured through voluntary surety relationships. Commenting on the "kings" of tuaths, Rothbard stated:
Law merchant, admiralty law, and early common law
Some libertarians have cited law merchant, admiralty law and early common law as examples of anarcho-capitalism.
In his work Power and Market, Rothbard stated:
Somalia from 1991 to 2006
Economist Alex Tabarrok argued that Somalia in its stateless period provided a "unique test of the theory of anarchy", in some respects close to that espoused by anarcho-capitalists David D. Friedman and Murray Rothbard. Nonetheless, both anarchists and some anarcho-capitalists argue that Somalia was not an anarchist society.
Analysis and criticism
State, justice and defense
Anarchists such as Brian Morris argue that anarcho-capitalism does not in fact get rid of the state. He says that anarcho-capitalists "simply replaced the state with private security firms, and can hardly be described as anarchists as the term is normally understood". In "Libertarianism: Bogus Anarchy", anarchist Peter Sabatini notes:
Similarly, Bob Black argues that an anarcho-capitalist wants to "abolish the state to his own satisfaction by calling it something else". He states that they do not denounce what the state does, they just "object to who's doing it".
Paul Birch argues that legal disputes involving several jurisdictions and different legal systems will be too complex and costly. He therefore argues that anarcho-capitalism is inherently unstable, and would evolve, entirely through the operation of free market forces, into either a single dominant private court with a natural monopoly of justice over the territory (a de facto state), a society of multiple city states, each with a territorial monopoly, or a 'pure anarchy' that would rapidly descend into chaos.
Randall G. Holcombe argues that anarcho-capitalism turns justice into a commodity as private defense and court firms would favour those who pay more for their services. He argues that defense agencies could form cartels and oppress people without fear of competition. Philosopher Albert Meltzer argued that since anarcho-capitalism promotes the idea of private armies, it actually supports a "limited State". He contends that it "is only possible to conceive of Anarchism which is free, communistic and offering no economic necessity for repression of countering it".
Libertarian Robert Nozick argues that a competitive legal system would evolve toward a monopoly government, even without violating individuals' rights in the process. In Anarchy, State, and Utopia, Nozick defends minarchism and argues that an anarcho-capitalist society would inevitably transform into a minarchist state through the eventual emergence of a monopolistic private defense and judicial agency that no longer faces competition. He argues that anarcho-capitalism results in an unstable system that would not endure in the real world. Anarcho-capitalists such as Roy Childs and Murray Rothbard have rejected Nozick's arguments, with Rothbard contending that the process described by Nozick, in which the dominant protection agency outlaws its competitors, in fact violates its own clients' rights. By contrast, John Jefferson endorses Nozick's argument and states that such events would best operate under laissez-faire. Robert Ellickson presented a Hayekian case against anarcho-capitalism, calling it a "pipe-dream" and stating that anarcho-capitalists, "by imagining a stable system of competing private associations, ignore both the inevitability of territorial monopolists in governance, and the importance of institutions to constrain those monopolists' abuses".
Some libertarians argue that anarcho-capitalism would result in different standards of justice and law due to relying too much on the market. Friedman responded to this criticism by arguing that it assumes the state is controlled by a majority group that has similar legal ideals. If the populace is diverse, different legal standards would therefore be appropriate.
Rights and freedom
Negative and positive rights are rights that oblige either action (positive rights) or inaction (negative rights). Anarcho-capitalists believe that negative rights should be recognized as legitimate, but positive rights should be rejected as an intrusion. Some critics reject the distinction between positive and negative rights. Peter Marshall also states that the anarcho-capitalist definition of freedom is entirely negative and that it cannot guarantee the positive freedom of individual autonomy and independence.
About anarcho-capitalism, anarcho-syndicalist and anti-capitalist intellectual Noam Chomsky says:
Economics and property
Social anarchists argue that anarcho-capitalism allows individuals to accumulate significant power through free markets and private property. Friedman responded by arguing that the Icelandic Commonwealth was able to prevent the wealthy from abusing the poor by requiring individuals who engaged in acts of violence to compensate their victims financially.
Anarchists argue that certain capitalist transactions are not voluntary and that maintaining the class structure of a capitalist society requires coercion which violates anarchist principles. Anthropologist David Graeber noted his skepticism about anarcho-capitalism along the same lines, arguing:
Some critics argue that the anarcho-capitalist concept of voluntary choice ignores constraints due to both human and non-human factors such as the need for food and shelter as well as active restriction of both used and unused resources by those enforcing property claims. If a person requires employment in order to feed and house himself, the employer-employee relationship could be considered involuntary. Another criticism is that employment is involuntary because the economic system that makes it necessary for some individuals to serve others is supported by the enforcement of coercive private property relations. Some philosophies view any ownership claims on land and natural resources as immoral and illegitimate. Objectivist philosopher Harry Binswanger criticizes anarcho-capitalism by arguing that "capitalism requires government", questioning who or what would enforce treaties and contracts.
Some right-libertarian critics of anarcho-capitalism who support the full privatization of capital, such as geolibertarians, argue that land and the raw materials of nature remain a distinct factor of production and cannot be justly converted to private property because they are not products of human labor. Some socialists, including market anarchists and mutualists, adamantly oppose absentee ownership. Anarcho-capitalists have strong abandonment criteria, namely that one maintains ownership until one agrees to trade or gift it. Anti-state critics of this view posit comparatively weak abandonment criteria, arguing that one loses ownership when one stops personally occupying and using the property, and that the idea of a perpetually binding original appropriation is anathema to traditional schools of anarchism.
Literature
The following is a partial list of notable nonfiction works discussing anarcho-capitalism.
Bruce L. Benson, The Enterprise of Law: Justice Without The State
To Serve and Protect: Privatization and Community in Criminal Justice
David D. Friedman, The Machinery of Freedom
Edward P. Stringham, Anarchy and the Law: The Political Economy of Choice
George H. Smith, "Justice Entrepreneurship in a Free Market"
Gerard Casey, Libertarian Anarchy: Against the State
Hans-Hermann Hoppe, Anarcho-Capitalism: An Annotated Bibliography
A Theory of Socialism and Capitalism
Democracy: The God That Failed
The Economics and Ethics of Private Property
Linda and Morris Tannehill, The Market for Liberty
Michael Huemer, The Problem of Political Authority
Murray Rothbard, founder of anarcho-capitalism:
For a New Liberty
Man, Economy, and State
Power and Market
The Ethics of Liberty
See also
Agorism
Anarcho-capitalism and minarchism
Consequentialist libertarianism
Counter-economics
Creative disruption
Crypto-anarchism
Definition of anarchism and libertarianism
Issues in anarchism
Left-wing market anarchism
Natural-rights libertarianism
Privatization in criminal justice
Propertarianism
Stateless society
The Libertarian Forum
Voluntaryism
Notes
References
Further reading
Brown, Susan Love (1997). "The Free Market as Salvation from Government: The Anarcho-Capitalist View". In Carrier, James G. (ed.). Meanings of the Market: The Free Market in Western Culture (illustrated ed.). Oxford: Berg Publishers. p. 99.
External links
Anarcho-capitalist FAQ
LewRockwell.com – website run by Lew Rockwell
Mises Institute – research and educational center of classical liberalism, including anarcho-capitalism, Austrian School of economics and American libertarian political theory
Property and Freedom Society – international anarcho-capitalist society
Strike The Root – an anarcho-capitalist website featuring essays, news, and a forum
Austrian School
Capitalist systems
Economic ideologies
Anarcho-capitalism
Ideologies of capitalism
Classical liberalism
Libertarianism by form
Political ideologies
Right-libertarianism
Syncretic political movements
Murray Rothbard
https://en.wikipedia.org/wiki/Almond
The almond (Prunus amygdalus, syn. Prunus dulcis) is a species of small tree from the genus Prunus, cultivated worldwide for its seed, a culinary nut. Along with the peach, it is classified in the subgenus Amygdalus, distinguished from the other subgenera by corrugations on the shell (endocarp) surrounding the seed.
The fruit of the almond is a drupe, consisting of an outer hull and a hard shell with the seed, which is not a true nut. Shelling almonds refers to removing the shell to reveal the seed. Almonds are sold shelled or unshelled. Blanched almonds are shelled almonds that have been treated with hot water to soften the seedcoat, which is then removed to reveal the white embryo. Once almonds are cleaned and processed, they can be stored over time. Almonds are used in many cuisines, often featuring prominently in desserts, such as marzipan.
The almond tree prospers in a moderate Mediterranean climate with cool winter weather. Native to Iran and surrounding countries including the Levant, today it is rarely found wild in its original setting. Almonds were one of the earliest domesticated fruit trees, due to the ability to produce quality offspring entirely from seed, without using suckers and cuttings. Evidence of domesticated almonds in the Early Bronze Age has been found in the archeological sites of the Middle East, and subsequently across the Mediterranean region and similar arid climates with cool winters.
California produces over half of the world's almond supply. Due to high acreage and water demand for almond cultivation, and need for pesticides, California almond production may be unsustainable, especially during the persistent drought and heat from climate change in the 21st century. Droughts in California have caused some producers to leave the industry, leading to lower supply and increased prices.
Description
The almond is a deciduous tree growing to in height, with a trunk of up to in diameter. The young twigs are green at first, becoming purplish where exposed to sunlight, then gray in their second year. The leaves are long, with a serrated margin and a petiole.
The flowers are white to pale pink, diameter with five petals, produced singly or in pairs and appearing before the leaves in early spring. Almond grows best in Mediterranean climates with warm, dry summers and mild, wet winters. The optimal temperature for their growth is between and the tree buds have a chilling requirement of 200 to 700 hours below to break dormancy.
Almonds begin bearing an economic crop in the third year after planting. Trees reach full bearing five to six years after planting. The fruit matures in the autumn, 7–8 months after flowering.
The almond fruit is long. It is not a nut but a drupe. The outer covering, consisting of an outer exocarp, or skin, and mesocarp, or flesh, fleshy in other members of Prunus such as the plum and cherry, is instead a thick, leathery, gray-green coat (with a downy exterior), called the hull. Inside the hull is a woody endocarp which forms a reticulated, hard shell (like the outside of a peach pit) called the pyrena. Inside the shell is the edible seed, commonly called a nut. Generally, one seed is present, but occasionally two occur. After the fruit matures, the hull splits and separates from the shell, and an abscission layer forms between the stem and the fruit so that the fruit can fall from the tree.
Taxonomy
Sweet and bitter almonds
The seeds of Prunus dulcis var. dulcis are predominantly sweet but some individual trees produce seeds that are somewhat more bitter. The genetic basis for bitterness involves a single gene, the bitter flavor furthermore being recessive, both aspects making this trait easier to domesticate. The fruits from Prunus dulcis var. amara are always bitter, as are the kernels from other species of genus Prunus, such as apricot, peach and cherry (although to a lesser extent).
The bitter almond is slightly broader and shorter than the sweet almond and contains about 50% of the fixed oil that occurs in sweet almonds. It also contains the enzyme emulsin which, in the presence of water, acts on the two soluble glucosides amygdalin and prunasin yielding glucose, cyanide and the essential oil of bitter almonds, which is nearly pure benzaldehyde, the chemical causing the bitter flavor. Bitter almonds may yield 4–9 milligrams of hydrogen cyanide per almond and contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. The origin of cyanide content in bitter almonds is via the enzymatic hydrolysis of amygdalin. P450 monooxygenases are involved in the amygdalin biosynthetic pathway. A point mutation in a bHLH transcription factor prevents transcription of the two cytochrome P450 genes, resulting in the sweet kernel trait.
Etymology
The word almond comes from Old French, via Late Latin, ultimately derived from the Ancient Greek (cf. amygdala, an almond-shaped portion of the brain). Late Old English had amygdales, "almonds".
The adjective amygdaloid (literally 'like an almond') is used to describe objects which are roughly almond-shaped, particularly a shape which is part way between a triangle and an ellipse. For example, the name of the amygdala of the brain is a direct borrowing of the Greek term.
Distribution and habitat
Almond is native to Iran and its surrounding regions, including the Levant area. It was spread by humans in ancient times along the shores of the Mediterranean into northern Africa and southern Europe, and more recently transported to other parts of the world, notably California, United States. The wild form of domesticated almond grows in parts of the Levant.
Selection of the sweet type from the many bitter types in the wild marked the beginning of almond domestication. It is unclear as to which wild ancestor of the almond created the domesticated species. The species Prunus fenzliana may be the most likely wild ancestor of the almond, in part because it is native to Armenia and western Azerbaijan, where it was apparently domesticated. Wild almond species were grown by early farmers, "at first unintentionally in the garbage heaps, and later intentionally in their orchards".
Cultivation
Almonds were one of the earliest domesticated fruit trees, due to "the ability of the grower to raise attractive almonds from seed. Thus, in spite of the fact that this plant does not lend itself to propagation from suckers or from cuttings, it could have been domesticated even before the introduction of grafting". Domesticated almonds appear in the Early Bronze Age (3000–2000 BC), such as at the archaeological site of Numeira (Jordan), or possibly earlier. Another well-known archaeological example of the almond is the fruit found in Tutankhamun's tomb in Egypt (c. 1325 BC), probably imported from the Levant. An article on almond tree cultivation in Spain appears in Ibn al-'Awwam's 12th-century agricultural work, the Book on Agriculture.
Of the European countries that the Royal Botanic Garden Edinburgh reported as cultivating almonds, Germany is the northernmost, though the domesticated form can be found as far north as Iceland.
Varieties
Almond trees are small to medium sized but commercial cultivars can be grafted onto a different root-stock to produce smaller trees. Varieties include:
– originates in the 1800s. A large tree that produces large, smooth, thin-shelled almonds with 60–65% edible kernel per nut. Requires pollination from other almond varieties for good nut production.
– originates in Italy. Has thicker, hairier shells with only 32% of edible kernel per nut. The thicker shell gives some protection from pests such as the navel orangeworm. Does not require pollination by other almond varieties.
Mariana – used as a rootstock to result in smaller trees
Breeding
Breeding programmes have found the high shell-seal trait.
Pollination
The most widely planted varieties of almond are self-incompatible; hence these trees require pollen from a tree with different genetic characters to produce seeds. Almond orchards therefore must grow mixtures of almond varieties. In addition, the pollen is transferred from flower to flower by insects; therefore commercial growers must ensure there are enough insects to perform this task. The large scale of almond production in the U.S. creates a significant problem of providing enough pollinating insects. Additional pollinating insects are therefore brought to the trees. The pollination of California's almonds is the largest annual managed pollination event in the world, with 1.4 million hives (nearly half of all beehives in the US) being brought to the almond orchards each February.
Much of the supply of bees is managed by pollination brokers, who contract with migratory beekeepers from at least 49 states for the event. This business was heavily affected by colony collapse disorder at the turn of the 21st century, causing a nationwide shortage of honey bees and increasing the price of insect pollination. To partially protect almond growers from these costs, researchers at the Agricultural Research Service, part of the United States Department of Agriculture (USDA), developed self-pollinating almond trees that combine this character with quality traits such as flavor and yield. Self-pollinating almond varieties existed previously, but lacked some commercial traits; through natural hybridisation between different almond varieties, a new self-pollinating variety with a high yield of commercial-quality nuts was produced.
Diseases
Almond trees can be attacked by an array of damaging microbes, fungal pathogens, plant viruses, and bacteria.
Pests
Pavement ants (Tetramorium caespitum), southern fire ants (Solenopsis xyloni), and thief ants (Solenopsis molesta) are seed predators. Bryobia rubrioculus mites are best known for the damage they cause to this crop.
Sustainability
Almond production in California is concentrated mainly in the Central Valley, where the mild climate, rich soil, abundant sunshine and water supply make for ideal growing conditions. Due to the persistent droughts in California in the early 21st century, it became more difficult to raise almonds in a sustainable manner. The issue is complex because of the high amount of water needed to produce almonds: a single almond requires roughly of water to grow properly. Because regulations related to water supplies are changing, some growers have destroyed their current almond orchards and replaced them with either younger trees or a different crop, such as pistachio, that needs less water.
Sustainability strategies implemented by the Almond Board of California and almond farmers include:
tree and soil health, and other farming practices
minimizing dust production during the harvest
bee health
irrigation guidelines for farmers
food safety
use of waste biomass as coproducts with a goal to achieve zero waste
use of solar energy during processing
job development
support of scientific research to investigate potential health benefits of consuming almonds
international education about sustainability practices
Production
In 2020, world production of almonds was 4.1 million tonnes, led by the United States providing 57% of the world total (table). Other leading producers were Spain, Australia, and Iran.
United States
In the United States, production is concentrated in California, where six different almond varieties were under cultivation in 2017. California production is marked by a period of intense pollination during late winter by rented commercial bees transported by truck across the U.S. to almond groves, requiring more than half of the total U.S. commercial honeybee population. The value of total U.S. exports of shelled almonds in 2016 was $3.2 billion.
All commercially grown almonds sold as food in the U.S. are sweet cultivars. The U.S. Food and Drug Administration reported in 2010 that some fractions of imported sweet almonds were contaminated with bitter almonds, which contain cyanide.
Spain
Spain has diverse commercial cultivars of almonds grown in Catalonia, Valencia, Murcia, Andalusia, and Aragón regions, and the Balearic Islands. Production in 2016 declined 2% nationally compared to 2015 production data.
The 'Marcona' almond cultivar is recognizably different from other almonds and is marketed by name. The kernel is short, round, relatively sweet, and delicate in texture. Its origin is unknown, but it has been grown in Spain for a long time; the tree is very productive, and the shell of the nut is very hard.
Australia
Australia is the largest almond production region in the Southern Hemisphere. Most of the almond orchards are located along the Murray River corridor in New South Wales, Victoria, and South Australia.
Toxicity
Bitter almonds contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. Extract of bitter almond was once used medicinally, but even in small doses its effects are severe or lethal, especially in children; the cyanide must be removed before consumption. The acute oral lethal dose of cyanide for adult humans is reported to be of body weight (approximately 50 bitter almonds), so that for children, consuming 5–10 bitter almonds may be fatal. Symptoms of eating such almonds include vertigo and other typical effects of cyanide poisoning.
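The almond counts above can be tied back to the 4–9 mg of hydrogen cyanide per bitter almond stated earlier with simple range arithmetic. A minimal sketch in Python, using only those figures from the text (the per-kilogram lethal dose itself is not reproduced here, and the function name is illustrative):

```python
# Rough cyanide-exposure arithmetic for bitter almonds.
# Per the text, each bitter almond may yield 4-9 mg of hydrogen
# cyanide (HCN); the almond counts below are the ones given there.

HCN_PER_ALMOND_MG = (4.0, 9.0)  # low/high HCN yield per bitter almond

def hcn_range_mg(n_almonds):
    """Total HCN (mg) from eating n bitter almonds, as a (low, high) range."""
    lo, hi = HCN_PER_ALMOND_MG
    return (n_almonds * lo, n_almonds * hi)

# ~50 bitter almonds: the approximate adult lethal count in the text
adult_lo, adult_hi = hcn_range_mg(50)   # 200-450 mg total HCN
# 5-10 bitter almonds: the range the text gives as possibly fatal to children
child_lo, _ = hcn_range_mg(5)
_, child_hi = hcn_range_mg(10)

print(f"50 bitter almonds: {adult_lo:.0f}-{adult_hi:.0f} mg HCN")
print(f"5-10 bitter almonds: {child_lo:.0f}-{child_hi:.0f} mg HCN")
```

This is only a bounding calculation from the per-almond yield; actual toxicity depends on body weight and other factors not modeled here.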
Almonds may cause allergy or intolerance. Cross-reactivity is common with peach allergens (lipid transfer proteins) and tree nut allergens. Symptoms range from local signs and symptoms (e.g., oral allergy syndrome, contact urticaria) to systemic signs and symptoms including anaphylaxis (e.g., urticaria, angioedema, gastrointestinal and respiratory symptoms).
Almonds are susceptible to aflatoxin-producing molds. Aflatoxins are potent carcinogenic chemicals produced by molds such as Aspergillus flavus and Aspergillus parasiticus. The mold contamination may occur from soil, previously infested almonds, and almond pests such as the navel orangeworm. High levels of mold growth typically appear as gray to black filament-like growth. It is unsafe to eat mold-infected tree nuts.
Some countries have strict limits on allowable levels of aflatoxin contamination of almonds and require adequate testing before the nuts can be marketed to their citizens. The European Union, for example, has required since 2007 that all almond shipments to the EU be tested for aflatoxin. If a consignment does not meet the strict safety limits, it may be reprocessed to eliminate the aflatoxin or it must be destroyed.
Breeding programs have found that the high shell-seal trait provides resistance against these Aspergillus species, and so against the development of their toxins.
Mandatory pasteurization in California
After tracing cases of salmonellosis to almonds, the USDA approved a proposal by the Almond Board of California to pasteurize almonds sold to the public. After publishing the rule in March 2007, the almond pasteurization program became mandatory for California companies effective 1 September 2007. Raw, untreated California almonds have not been available in the U.S. since then.
California almonds labeled "raw" must be steam-pasteurized or chemically treated with propylene oxide (PPO). This does not apply to imported almonds or almonds sold from the grower directly to the consumer in small quantities. The treatment also is not required for raw almonds sold for export outside of North America.
The Almond Board of California states: "PPO residue dissipates after treatment". The U.S. Environmental Protection Agency has reported: "Propylene oxide has been detected in fumigated food products; consumption of contaminated food is another possible route of exposure". PPO is classified by the International Agency for Research on Cancer as Group 2B ("possibly carcinogenic to humans").
The USDA-approved marketing order was challenged in court by organic farmers organized by the Cornucopia Institute, a Wisconsin-based farm policy research group which filed a lawsuit in September 2008. According to the institute, this almond marketing order has imposed significant financial burdens on small-scale and organic growers and damaged domestic almond markets. A federal judge dismissed the lawsuit in early 2009 on procedural grounds. In August 2010, a federal appeals court ruled that the farmers have a right to appeal the USDA regulation. In March 2013, the court vacated the suit on the basis that the objections should have been raised in 2007 when the regulation was first proposed.
Uses
Nutrition
Almonds are 4% water, 22% carbohydrates, 21% protein, and 50% fat (table). In a reference amount, almonds supply of food energy. The almond is a nutritionally dense food (table), providing a rich source (20% or more of the Daily Value, DV) of the B vitamins riboflavin and niacin, vitamin E, and the essential minerals calcium, copper, iron, magnesium, manganese, phosphorus, and zinc. Almonds are a moderate source (10–19% DV) of the B vitamins thiamine, vitamin B6, and folate, choline, and the essential mineral potassium. They also contain substantial dietary fiber, the monounsaturated fat, oleic acid, and the polyunsaturated fat, linoleic acid. Typical of nuts and seeds, almonds are a source of phytosterols such as beta-sitosterol, stigmasterol, campesterol, sitostanol, and campestanol.
Health
Almonds are included as a good source of protein among recommended healthy foods by the U.S. Department of Agriculture (USDA). A 2016 review of clinical research indicated that regular consumption of almonds may reduce the risk of heart disease by lowering blood levels of LDL cholesterol.
Culinary
While the almond is often eaten on its own, raw or toasted, it is also a component of various dishes. Almonds are available in many forms, such as whole, slivered, and ground into flour. Small almond pieces, called "nibs", are used for special purposes such as decoration.
Almonds are a common addition to breakfast muesli or oatmeal.
Desserts
A wide range of classic sweets feature almonds as a central ingredient. Marzipan was developed in the Middle Ages. Since the 19th century almonds have been used to make bread, almond butter, cakes and puddings, candied confections, almond cream-filled pastries, nougat, cookies (macaroons, biscotti and qurabiya), cakes (financiers, Esterházy torte), and other sweets and desserts.
The young, developing fruit of the almond tree can be eaten whole (green almonds) when they are still green and fleshy on the outside and the inner shell has not yet hardened. The fruit is somewhat sour, but is a popular snack in parts of the Middle East, eaten dipped in salt to balance the sour taste. Also in the Middle East they are often eaten with dates. They are available only from mid-April to mid-June in the Northern Hemisphere; pickling or brining extends the fruit's shelf life.
Marzipan
Marzipan, a smooth, sweetened almond paste, is used in a number of elegant cakes and desserts. Princess cake is covered by marzipan (similar to fondant), as is Battenberg cake. In Sicily, sponge cake is covered with marzipan to make cassatella di sant'Agata and cassata siciliana, and marzipan is dyed and crafted into realistic fruit shapes to make frutta martorana. The Andalusian Christmas pastry pan de Cádiz is filled with marzipan and candied fruit.
World cuisines
In French cuisine, alternating layers of almond and hazelnut meringue are used to make the dessert dacquoise. Pithivier is one of many almond cream-filled pastries.
In Germany, Easter bread called Deutsches Osterbrot is baked with raisins and almonds.
In Greece, almond flour is used to make amygdalopita, a glyka tapsiou dessert cake baked in a tray. Almonds are used for kourabiedes, a Greek version of the traditional qurabiya almond biscuits. A soft drink known as soumada is made from almonds in various regions.
In Saudi Arabia, almonds are a typical embellishment for the rice dish kabsa.
In Iran, green almonds are dipped in sea salt and eaten as snacks on street markets; they are called chaqale bâdam. Candied almonds called noghl are served alongside tea and coffee. Also, sweet almonds are used to prepare special food for babies, named harire badam. Almonds are added to some foods, cookies, and desserts, or are used to decorate foods. People in Iran consume roasted nuts for special events, for example, during New Year (Nowruz) parties.
In Italy, colomba di Pasqua is a traditional Easter cake made with almonds. Bitter almonds are the base for amaretti cookies, a common dessert. Almonds are also a common choice as the nuts to include in torrone.
In Morocco, almonds in the form of sweet almond paste are the main ingredient in pastry fillings, and several other desserts. Fried blanched whole almonds are also used to decorate sweet tajines such as lamb with prunes. Southwestern Berber regions of Essaouira and Souss are also known for amlou, a spread made of almond paste, argan oil, and honey. Almond paste is also mixed with toasted flour and among others, honey, olive oil or butter, anise, fennel, sesame seeds, and cinnamon to make sellou (also called zamita in Meknes or slilou in Marrakech), a sweet snack known for its long shelf life and high nutritive value.
In Indian cuisine, almonds are the base ingredients of pasanda-style and Mughlai curries. Badam halva is a sweet made from almonds with added coloring. Almond flakes are added to many sweets (such as sohan barfi), and are usually visible sticking to the outer surface. Almonds form the base of various drinks which are supposed to have cooling properties. Almond sherbet or sherbet-e-badaam, is a popular summer drink. Almonds are also sold as a snack with added salt.
In Israel almonds are used as a topping for tahini cookies or eaten as a snack.
In Spain Marcona almonds are usually toasted in oil and lightly salted. They are used by Spanish confectioners to prepare a sweet called turrón.
In Arabian cuisine, almonds are commonly used as garnishing for Mansaf.
Certain natural food stores sell "bitter almonds" or "apricot kernels" labeled as such; these products require significant caution from consumers in how they are prepared and eaten.
Milk
Almonds can be processed into a milk substitute called almond milk; the nut's soft texture, mild flavor, and light coloring (when skinned) make for an efficient analog to dairy, and a soy-free choice for lactose intolerant people and vegans. Raw, blanched, and lightly toasted almonds work well for different production techniques, some of which are similar to that of soy milk and some of which use no heat, resulting in raw milk.
Almond milk, along with almond butter and almond oil, are versatile products used in both sweet and savoury dishes.
In Moroccan cuisine, sharbat billooz, a common beverage, is made by blending blanched almonds with milk, sugar and other flavorings.
Flour and skins
Almond flour or ground almond meal combined with sugar or honey as marzipan is often used as a gluten-free alternative to wheat flour in cooking and baking.
Almonds contain polyphenols in their skins consisting of flavonols, flavan-3-ols, hydroxybenzoic acids and flavanones analogous to those of certain fruits and vegetables. These phenolic compounds and almond skin prebiotic dietary fiber have commercial interest as food additives or dietary supplements.
Syrup
Historically, almond syrup was an emulsion of sweet and bitter almonds, usually made with barley syrup (orgeat syrup) or in a syrup of orange flower water and sugar, often flavored with a synthetic aroma of almonds. Orgeat syrup is an important ingredient in the Mai Tai and many other Tiki drinks.
Due to the cyanide found in bitter almonds, modern syrups generally are produced only from sweet almonds. Such syrup products do not contain significant levels of hydrocyanic acid, so are generally considered safe for human consumption.
Oils
Almonds are a rich source of oil, with 50% of kernel dry mass as fat (whole almond nutrition table). In relation to total dry mass of the kernel, almond oil contains 32% monounsaturated oleic acid (an omega-9 fatty acid), 13% linoleic acid (a polyunsaturated omega-6 essential fatty acid), and 10% saturated fatty acid (mainly as palmitic acid). Linolenic acid, a polyunsaturated omega-3 fat, is not present (table). Almond oil is a rich source of vitamin E, providing 261% of the Daily Value per 100 millilitres.
When almond oil is analyzed separately and expressed per 100 grams as a reference mass, the oil provides 8 grams of saturated fat (81% of which is palmitic acid), 70 grams of oleic acid, and 17 grams of linoleic acid (oil table).
Oleum amygdalae, the fixed oil, is prepared from either sweet or bitter almonds, and is a glyceryl oleate with a slight odour and a nutty taste. It is almost insoluble in alcohol but readily soluble in chloroform or ether. Almond oil is obtained from the dried kernel of almonds. Sweet almond oil is used as a carrier oil in aromatherapy and cosmetics while bitter almond oil, containing benzaldehyde, is used as a food flavouring and in perfume.
In culture
The almond is highly revered in some cultures. The tree originated in the Middle East. In the Bible, the almond is mentioned ten times, beginning with Genesis 43:11, where it is described as "among the best of fruits". In Numbers 17, Levi is chosen from the other tribes of Israel by Aaron's rod, which brought forth almond flowers. The almond blossom supplied a model for the menorah which stood in the Holy Temple, "Three cups, shaped like almond blossoms, were on one branch, with a knob and a flower; and three cups, shaped like almond blossoms, were on the other … on the candlestick itself were four cups, shaped like almond blossoms, with its knobs and flowers" (Exodus 25:33–34; 37:19–20). Many Sephardic Jews give five almonds to each guest before special occasions like weddings.
Similarly, Christian symbolism often uses almond branches as a symbol of the virgin birth of Jesus; paintings and icons often include almond-shaped haloes encircling the Christ Child and as a symbol of Mary. The word "luz", which appears in Genesis 30:37, sometimes translated as "hazel", may actually be derived from the Aramaic name for almond (Luz), and is translated as such in the New International Version and other versions of the Bible. The Arabic name for almond is لوز "lauz" or "lūz". In some parts of the Levant and North Africa, it is pronounced "loz", which is very close to its Aramaic origin.
The Entrance of the flower (La entrada de la flor) is an event celebrated on 1 February in Torrent, Spain, in which the clavarios and members of the Confrerie of the Mother of God deliver a branch of the first-blooming almond-tree to the Virgin.
See also
Fruit tree forms
Fruit tree propagation
Fruit tree pruning
List of almond dishes
List of edible seeds
References
External links
University of California Fruit and Nut Research and Information Center
Benefits of Soaked Almonds
Almond
Edible nuts and seeds
Flora of Asia
Pollination management
Snack foods
Almond oil
Crops
Fruit trees
Symbols of California
Flora of Lebanon and Syria
|
https://en.wikipedia.org/wiki/Analysis
|
Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 BC), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek ἀνάλυσις (analysis, "a breaking-up" or "an untying"; from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Applications
Science
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device.
Types of analysis:
A) Qualitative analysis: determines which components are present in a given sample or compound.
Example: precipitation reactions
B) Quantitative analysis: determines the quantity of each individual component present in a given sample or compound.
Example: finding a concentration with a UV spectrophotometer
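As a sketch of quantitative analysis, an analyte's concentration can be estimated from a UV-spectrophotometer absorbance reading via the Beer–Lambert law, A = ε·l·c. The molar absorptivity and absorbance values below are hypothetical, chosen only to illustrate the calculation:

```python
# Quantitative analysis sketch: Beer–Lambert law, A = epsilon * l * c,
# solved for the concentration c.

def concentration(absorbance, epsilon, path_length_cm=1.0):
    """Return concentration in mol/L given absorbance A, molar
    absorptivity epsilon (L/(mol*cm)), and path length l (cm)."""
    return absorbance / (epsilon * path_length_cm)

# Hypothetical calibration: epsilon = 15000 L/(mol*cm), 1 cm cuvette.
c = concentration(absorbance=0.45, epsilon=15000)
print(f"{c:.2e} mol/L")  # 3.00e-05 mol/L
```

In practice ε is obtained from a calibration curve of standards of known concentration rather than assumed, as it is here.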
Isotopes
Chemists can use isotope analysis to assist analysts with issues in anthropology, archeology, food chemistry, forensics, geology, and a host of other questions of physical science. Analysts can discern the origins of natural and man-made isotopes in the study of environmental radioactivity.
Business
Financial statement analysis – the analysis of the accounts and the economic prospects of a firm
Financial analysis – refers to an assessment of the viability, stability, and profitability of a business, sub-business or project
Gap analysis – involves the comparison of actual performance with potential or desired performance of an organization
Business analysis – involves identifying the needs and determining the solutions to business problems
Price analysis – involves the breakdown of a price to a unit figure
Market analysis – consists of suppliers and customers, and price is determined by the interaction of supply and demand
Sum-of-the-parts analysis – method of valuation of a multi-divisional company
Opportunity analysis – examines customer trends within the industry; customer demand and experience determine purchasing behavior
Computer science
Requirements analysis – encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
Competitive analysis (online algorithm) – shows how online algorithms perform and demonstrates the power of randomization in algorithms
Lexical analysis – the process of processing an input sequence of characters and producing as output a sequence of symbols
Object-oriented analysis and design – à la Booch
Program analysis (computer science) – the process of automatically analysing the behavior of computer programs
Semantic analysis (computer science) – a pass by a compiler that adds semantical information to the parse tree and performs certain checks
Static code analysis – the analysis of computer software that is performed without actually executing the programs built from that software
Structured systems analysis and design methodology – à la Yourdon
Syntax analysis – a process in compilers that recognizes the structure of programming languages, also known as parsing
Worst-case execution time – determines the longest time that a piece of software can take to run
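As an illustration of lexical analysis from the list above, a minimal tokenizer maps an input character sequence to a sequence of (type, value) symbols. The token names and toy grammar here are illustrative, not tied to any particular compiler:

```python
import re

# Token types tried in order; whitespace is matched but discarded.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(text):
    """Yield (token_type, lexeme) pairs; unrecognized characters are skipped."""
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 42 + y")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

The output symbol stream would then feed syntax analysis (parsing), the next compiler pass listed above.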
Economics
Agroecosystem analysis
Input–output model – if applied to a region, it is called a Regional Impact Multiplier System
Engineering
Analysts in the field of engineering look at requirements, structures, mechanisms, systems and dimensions. Electrical engineers analyse systems in electronics. Life cycles and system failures are broken down and studied by engineers. Engineering analysis also examines the different factors incorporated within a design.
Intelligence
The field of intelligence employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and wargaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis.
Linguistics
Linguistics explores individual languages and language in general. It breaks language down and analyses its component parts: theory, sounds and their meaning, utterance usage, word origins, the history of words, the meaning of words and word combinations, sentence construction, basic construction beyond the sentence level, stylistics, and conversation. It examines the above using statistics and modeling, and semantics. It analyses language in context of anthropology, biology, evolution, geography, history, neurology, psychology, and sociology. It also takes the applied approach, looking at individual language development and clinical issues.
Literature
Literary criticism is the analysis of literature. The focus can be as diverse as the analysis of Homer or Freud. While not all literary-critical methods are primarily analytical in nature, the main approach to the teaching of literature in the west since the mid-twentieth century, literary formal analysis or close reading, is. This method, rooted in the academic movement labelled The New Criticism, approaches texts – chiefly short poems such as sonnets, which by virtue of their small size and significant complexity lend themselves well to this type of analysis – as units of discourse that can be understood in themselves, without reference to biographical or historical frameworks. This method of analysis breaks up the text linguistically in a study of prosody (the formal analysis of meter) and phonic effects such as alliteration and rhyme, and cognitively in examination of the interplay of syntactic structures, figurative language, and other elements of the poem that work to produce its larger effects.
Mathematics
Modern mathematical analysis is the study of infinite processes. It is the branch of mathematics that includes calculus. It can be applied in the study of classical concepts of mathematics, such as real numbers, complex variables, trigonometric functions, and algorithms, or of non-classical concepts like constructivism, harmonics, infinity, and vectors.
Florian Cajori explains in A History of Mathematics (1893) the difference between modern and ancient mathematical analysis, as distinct from logical analysis, as follows:
The terms synthesis and analysis are used in mathematics in a more special sense than in logic. In ancient mathematics they had a different meaning from what they now have. The oldest definition of mathematical analysis as opposed to synthesis is that given in [appended to] Euclid, XIII. 5, which in all probability was framed by Eudoxus: "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it."
The analytic method is not conclusive, unless all operations involved in it are known to be reversible. To remove all doubt, the Greeks, as a rule, added to the analytic process a synthetic one, consisting of a reversion of all operations occurring in the analysis. Thus the aim of analysis was to aid in the discovery of synthetic proofs or solutions.
James Gow uses a similar argument as Cajori, with the following clarification, in his A Short History of Greek Mathematics (1884):
The synthetic proof proceeds by shewing that the proposed new truth involves certain admitted truths. An analytic proof begins by an assumption, upon which a synthetic reasoning is founded. The Greeks distinguished theoretic from problematic analysis. A theoretic analysis is of the following kind. To prove that A is B, assume first that A is B. If so, then, since B is C and C is D and D is E, therefore A is E. If this be known a falsity, A is not B. But if this be a known truth and all the intermediate propositions be convertible, then the reverse process, A is E, E is D, D is C, C is B, therefore A is B, constitutes a synthetic proof of the original theorem. Problematic analysis is applied in all cases where it is proposed to construct a figure which is assumed to satisfy a given condition. The problem is then converted into some theorem which is involved in the condition and which is proved synthetically, and the steps of this synthetic proof taken backwards are a synthetic solution of the problem.
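Gow's two-stage pattern can be set out schematically (a sketch of the proof shape described above, not a formula from the source):

```latex
% Theoretic analysis: assume the proposition sought (A is B) and, via
% admitted intermediate truths, reason down to something already known.
\[
A \text{ is } B \;\xrightarrow{\,B \text{ is } C\,}\; A \text{ is } C
\;\xrightarrow{\,C \text{ is } D\,}\; A \text{ is } D
\;\xrightarrow{\,D \text{ is } E\,}\; A \text{ is } E
\]
% If "A is E" is a known falsity, the assumption is refuted.  If it is
% a known truth and every step is convertible, reversing the chain
% yields the synthetic proof:
\[
A \text{ is } E,\; E \text{ is } D,\; D \text{ is } C,\; C \text{ is } B
\;\therefore\; A \text{ is } B.
\]
```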
Music
Musical analysis – a process attempting to answer the question "How does this music work?"
Musical analysis is the study of how composers use notes together to compose music. Those studying music will find differences in each composer's musical analysis, which differ depending on the culture and history of the music studied. An analysis of music is meant to simplify the music for the listener.
Schenkerian analysis
Schenkerian analysis is a method of music analysis that focuses on producing a graphic representation. This includes both the analytical procedure and the notational style. Simply put, it analyzes tonal music, including all chords and tones within a composition.
Philosophy
Philosophical analysis – a general term for the techniques used by philosophers
Philosophical analysis refers to the clarification and composition of words put together and the entailed meaning behind them. Philosophical analysis dives deeper into the meaning of words and seeks to clarify that meaning by contrasting the various definitions. It is the study of reality, justification of claims, and the analysis of various concepts. Branches of philosophy include logic, justification, metaphysics, values and ethics. If questions can be answered empirically, meaning they can be answered by using the senses, then they are not considered philosophical. Non-philosophical questions also include events that happened in the past, or questions that science or mathematics can answer.
Analysis is the name of a prominent journal in philosophy.
Psychotherapy
Psychoanalysis – seeks to elucidate connections among unconscious components of patients' mental processes
Transactional analysis
Transactional analysis is used by therapists to try to gain a better understanding of the unconscious. It focuses on understanding and intervening in human behavior.
Policy
Policy analysis – The use of statistical data to predict the effects of policy decisions made by governments and agencies
Policy analysis includes a systematic process to find the most efficient and effective option to address the current situation.
Qualitative analysis – The use of anecdotal evidence to predict the effects of policy decisions or, more generally, influence policy decisions
Signal processing
Finite element analysis – a computer simulation technique used in engineering analysis
Independent component analysis
Link quality analysis – the analysis of signal quality
Path quality analysis
Fourier analysis
Statistics
In statistics, the term analysis may refer to any method used for data analysis. Among the many such methods, some are:
Analysis of variance (ANOVA) – a collection of statistical models and their associated procedures which compare means by splitting the overall observed variance into different parts
Boolean analysis – a method to find deterministic dependencies between variables in a sample, mostly used in exploratory data analysis
Cluster analysis – techniques for finding groups (called clusters), based on some measure of proximity or similarity
Factor analysis – a method to construct models describing a data set of observed variables in terms of a smaller set of unobserved variables (called factors)
Meta-analysis – combines the results of several studies that address a set of related research hypotheses
Multivariate analysis – analysis of data involving several variables, such as by factor analysis, regression analysis, or principal component analysis
Principal component analysis – transformation of a sample of correlated variables into uncorrelated variables (called principal components), mostly used in exploratory data analysis
Regression analysis – techniques for analysing the relationships between several predictive variables and one or more outcomes in the data
Scale analysis (statistics) – methods to analyse survey data by scoring responses on a numeric scale
Sensitivity analysis – the study of how the variation in the output of a model depends on variations in the inputs
Sequential analysis – evaluation of sampled data as it is collected, until the criterion of a stopping rule is met
Spatial analysis – the study of entities using geometric or geographic properties
Time-series analysis – methods that attempt to understand a sequence of data points spaced apart at uniform time intervals
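The first entry above, analysis of variance, can be sketched by hand: the total observed variance is split into between-group and within-group parts, whose ratio of mean squares gives the F statistic. The sample data below are hypothetical:

```python
# One-way ANOVA from scratch: compare group means by partitioning
# the overall variation into between-group and within-group parts.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

groups = [[4.1, 3.9, 4.3], [5.0, 5.2, 4.8], [6.1, 5.9, 6.0]]
f, dfb, dfw = one_way_anova(groups)
print(round(f, 2), dfb, dfw)
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom indicates that the group means differ by more than within-group noise would explain.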
Other
Aura analysis – a technique in which supporters of the method claim that the body's aura, or energy field, is analysed
Bowling analysis – Analysis of the performance of cricket players
Lithic analysis – the analysis of stone tools using basic scientific techniques
Lithic analysis is most often used by archeologists to determine which types of tools were used in a given time period, based on the artifacts discovered.
Protocol analysis – a means for extracting persons' thoughts while they are performing a task
See also
Formal analysis
Metabolism in biology
Methodology
Scientific method
References
External links
Abstraction
Critical thinking skills
Emergence
Empiricism
Epistemological theories
Intelligence
Mathematical modeling
Metaphysics of mind
Methodology
Ontology
Philosophy of logic
Rationalism
Reasoning
Research methods
Scientific method
Theory of mind
|
https://en.wikipedia.org/wiki/Automorphism
|
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Definition
In the context of abstract algebra, a mathematical object is an algebraic structure such as a group, ring, or vector space. An automorphism is simply a bijective homomorphism of an object with itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.)
The identity morphism (identity mapping) is called the trivial automorphism in some contexts. Other (non-identity) automorphisms are called nontrivial automorphisms.
The exact definition of an automorphism depends on the type of "mathematical object" in question and what, precisely, constitutes an "isomorphism" of that object. The most general setting in which these words have meaning is an abstract branch of mathematics called category theory. Category theory deals with abstract objects and morphisms between those objects.
In category theory, an automorphism is an endomorphism (i.e., a morphism from an object to itself) which is also an isomorphism (in the categorical sense of the word, meaning there exists a right and left inverse endomorphism).
This is a very abstract definition since, in category theory, morphisms are not necessarily functions and objects are not necessarily sets. In most concrete settings, however, the objects will be sets with some additional structure and the morphisms will be functions preserving that structure.
Automorphism group
If the automorphisms of an object X form a set (instead of a proper class), then they form a group under composition of morphisms. This group is called the automorphism group of X.
Closure: Composition of two automorphisms is another automorphism.
Associativity: It is part of the definition of a category that composition of morphisms is associative.
Identity: The identity is the identity morphism from an object to itself, which is an automorphism.
Inverses: By definition every isomorphism has an inverse that is also an isomorphism, and since the inverse is also an endomorphism of the same object it is an automorphism.
The automorphism group of an object X in a category C is denoted AutC(X), or simply Aut(X) if the category is clear from context.
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
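The claim about negation can be checked on a finite sample: negation preserves the additive group structure of Z but breaks the ring structure, since it does not respect multiplication. A small sketch (a spot check, not a proof):

```python
# f(n) = -n: an automorphism of (Z, +) but not of the ring Z.
f = lambda n: -n

sample = range(-5, 6)
# Additive homomorphism: f(a + b) == f(a) + f(b) for all a, b.
additive = all(f(a + b) == f(a) + f(b) for a in sample for b in sample)
# Multiplicative: f(a * b) == f(a) * f(b) fails whenever a * b != 0,
# because -(ab) != (-a)(-b) = ab.
multiplicative = all(f(a * b) == f(a) * f(b) for a in sample for b in sample)
print(additive, multiplicative)  # True False
```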
A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.
In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The algebraic structure of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements precisely consist of GL(V).)
A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice). Field automorphisms are important to the theory of field extensions, in particular Galois extensions. In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension.
The automorphism group of the quaternions (H) as a ring consists of the inner automorphisms, by the Skolem–Noether theorem: maps of the form a ↦ bab−1. This group is isomorphic to SO(3), the group of rotations in 3-dimensional space.
The automorphism group of the octonions (O) is the exceptional Lie group G2.
In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation.
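For a small graph, this definition can be applied directly by brute force: a permutation of the nodes is an automorphism exactly when it maps the edge set onto itself. The 4-cycle below is an illustrative example:

```python
from itertools import permutations

# A 4-cycle: nodes 0-1-2-3-0, edges stored as unordered pairs.
nodes = [0, 1, 2, 3]
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def automorphisms(nodes, edges):
    """Yield every node permutation that maps the edge set onto itself."""
    for perm in permutations(nodes):
        sigma = dict(zip(nodes, perm))
        if {frozenset({sigma[a], sigma[b]}) for a, b in edges} == edges:
            yield sigma

autos = list(automorphisms(nodes, edges))
print(len(autos))  # 8 -- the symmetries of a square (rotations and reflections)
```

Enumerating all n! permutations is only feasible for tiny graphs; practical tools use refinement algorithms instead, but the defining condition tested here is the same.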
In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used:
In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group.
In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations.
An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M).
In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism.
History
One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing:
so that is a new fifth root of unity, connected with the former fifth root by relations of perfect reciprocity.
Inner and outer automorphisms
In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms.
In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation given by g ↦ aga−1 (or a−1ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G).
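That conjugation is a group automorphism can be checked exhaustively in a small group such as S3, with permutations represented as tuples. This is an illustrative spot check, not part of the source:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations as tuples p, q."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))  # all 6 permutations of {0, 1, 2}

def conj(a, g):
    """Conjugation by a: g -> a g a^{-1}."""
    return compose(compose(a, g), inverse(a))

# Homomorphism property: conj_a(g h) == conj_a(g) conj_a(h) for all a, g, h.
ok = all(conj(a, compose(g, h)) == compose(conj(a, g), conj(a, h))
         for a in S3 for g in S3 for h in S3)
print(ok)  # True
```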
The other automorphisms are called outer automorphisms. The quotient group Aut(G) / Inn(G) is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms.
The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different.
See also
Antiautomorphism
Automorphism (in Sudoku puzzles)
Characteristic subgroup
Endomorphism ring
Frobenius automorphism
Morphism
Order automorphism (in order theory).
Relation-preserving automorphism
Fractional Fourier transform
References
External links
Automorphism at Encyclopaedia of Mathematics
Morphisms
Abstract algebra
Symmetry
|
https://en.wikipedia.org/wiki/Architect
|
An architect is a person who plans, designs and oversees the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose. Etymologically, the term architect derives from the Latin architectus, which derives from the Greek arkhitekton (archi-, chief + tekton, builder), i.e., chief builder.
The professional requirements for architects vary from location to location. An architect's decisions affect public safety and thus the architect must undergo specialized training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. Practical, technical, and academic requirements for becoming an architect vary by jurisdiction though the formal study of architecture in academic institutions has played a pivotal role in the development of the profession.
Origins
Throughout ancient and medieval history, most architectural design and construction was carried out by artisans—such as stone masons and carpenters, rising to the role of master builder. Until modern times, there was no clear distinction between architect and engineer. In Europe, the titles architect and engineer were primarily geographical variations that referred to the same person, often used interchangeably.
"Architect" derives from the Greek ἀρχιτέκτων (arkhitektōn, "master builder", "chief craftsman").
It is suggested that various developments in technology and mathematics allowed the development of the professional 'gentleman' architect, separate from the hands-on craftsman. Paper was not used in Europe for drawing until the 15th century, but became increasingly available after 1500. Pencils were used for drawing by 1600. The availability of both paper and pencils allowed pre-construction drawings to be made by professionals. Concurrently, the introduction of linear perspective and innovations such as the use of different projections to describe a three-dimensional building in two dimensions, together with an increased understanding of dimensional accuracy, helped building designers communicate their ideas. However, development was gradual. Until the 18th century, buildings continued to be designed and set out by craftsmen, with the exception of high-status projects.
Architecture
In most developed countries, only those qualified with an appropriate license, certification, or registration with a relevant body (often governmental) may legally practice architecture. Such licensure usually requires a university degree, successful completion of exams, and a period of training. Representation of oneself as an architect through the use of terms and titles is restricted to licensed individuals by law, although in general, derivatives such as architectural designer are not legally protected.
To practice architecture implies the ability to practice independently of supervision. The term building design professional (or design professional), by contrast, is a much broader term that includes professionals who practice independently under an alternate profession, such as engineering professionals, or those who assist in the practice of architecture under the supervision of a licensed architect, such as intern architects. In many places, independent, non-licensed individuals may perform design services outside the professional restrictions, such as the design of houses or other smaller structures.
Practice
In the architectural profession, technical and environmental knowledge, design, and construction management, require an understanding of business as well as design. However, design is the driving force throughout the project and beyond. An architect accepts a commission from a client. The commission might involve preparing feasibility reports, building audits, designing a building or several buildings, structures, and the spaces among them. The architect participates in developing the requirements the client wants in the building. Throughout the project (planning to occupancy), the architect coordinates a design team. Structural, mechanical, and electrical engineers are hired by the client or architect, who must ensure that the work is coordinated to construct the design.
Design role
The architect, once hired by a client, is responsible for creating a design concept that meets the requirements of that client and provides a facility suitable to the required use. The architect must meet with and put questions to the client, in order to ascertain all the requirements (and nuances) of the planned project.
Often the full brief is not clear at the beginning, which introduces a degree of risk into the design undertaking. The architect may make early proposals to the client which may rework the terms of the brief. The "program" (or brief) is essential to producing a project that meets all the needs of the owner, and it becomes a guide for the architect in creating the design concept.
Design proposal(s) are generally expected to be both imaginative and pragmatic. Much depends upon the time, place, finance, culture, and available crafts and technology in which the design takes place. The extent and nature of these expectations will vary. Foresight is a prerequisite when designing buildings as it is a very complex and demanding undertaking.
Any design concept during the early stage of its generation must take into account a great number of issues and variables including qualities of space(s), the end-use and life-cycle of these proposed spaces, connections, relations, and aspects between spaces including how they are put together and the impact of proposals on the immediate and wider locality. Selection of appropriate materials and technology must be considered, tested, and reviewed at an early stage in the design to ensure there are no setbacks (such as higher-than-expected costs) which could occur later in the project.
The site and its surrounding environment as well as the culture and history of the place, will also influence the design. The design must also balance increasing concerns with environmental sustainability. The architect may introduce (intentionally or not), aspects of mathematics and architecture, new or current architectural theory, or references to architectural history.
A key part of the design is that the architect often must consult with engineers, surveyors and other specialists throughout the design, ensuring that aspects such as structural supports and air conditioning elements are coordinated. The control and planning of construction costs are also a part of these consultations. Coordination of the different aspects involves a high degree of specialized communication including advanced computer technology such as building information modeling (BIM), computer-aided design (CAD), and cloud-based technologies. Finally, at all times, the architect must report back to the client who may have reservations or recommendations which might introduce further variables into the design.
Architects also deal with local and federal jurisdictions regarding regulations and building codes. The architect might need to comply with local planning and zoning laws such as required setbacks, height limitations, parking requirements, transparency requirements (windows), and land use. Some jurisdictions require adherence to design and historic preservation guidelines. Health and safety risks form a vital part of the current design, and in some jurisdictions, design reports and records are required to include ongoing considerations of materials and contaminants, waste management and recycling, traffic control, and fire safety.
Means of design
Previously, architects employed drawings to illustrate and generate design proposals. While conceptual sketches are still widely used by architects, computer technology has now become the industry standard. Furthermore, design may include the use of photos, collages, prints, linocuts, 3D scanning technology, and other media in design production.
Increasingly, computer software is shaping how architects work. BIM technology allows for the creation of a virtual building that serves as an information database for the sharing of design and building information throughout the life-cycle of the building's design, construction, and maintenance. Virtual reality (VR) presentations are becoming more common for visualizing structural designs and interior spaces from the point-of-view perspective.
Environmental role
Since modern buildings are known to release carbon into the atmosphere, increasing controls are being placed on buildings and associated technology to reduce emissions, increase energy efficiency, and make use of renewable energy sources. Renewable energy sources may be designed into the proposed building by local or national renewable energy providers. As a result, the architect is required to remain abreast of current regulations that are continually being updated. Some new developments exhibit extremely low energy use or passive solar building design.
However, the architect is also increasingly being required to provide initiatives in a wider environmental sense. Examples of this include making provisions for low-energy transport, natural daylighting instead of artificial lighting, natural ventilation instead of air conditioning, pollution, and waste management, use of recycled materials, and employment of materials which can be easily recycled.
Construction role
As the design becomes more advanced and detailed, specifications and detail designs are made of all the elements and components of the building. Techniques in the production of a building are continually advancing which places a demand on the architect to ensure that he or she remains up to date with these advances.
Depending on the client's needs and the jurisdiction's requirements, the spectrum of the architect's services during each construction stage may be extensive (detailed document preparation and construction review) or less involved (such as allowing a contractor to exercise considerable design-build functions).
Architects typically put projects to tender on behalf of their clients, advise them on the award of the project to a general contractor, facilitate and administer a contract of agreement which is often between the client and the contractor. This contract is legally binding and covers a wide range of aspects including the insurance and commitments of all stakeholders, the status of the design documents, provisions for the architect's access, and procedures for the control of the works as they proceed. Depending on the type of contract utilized, provisions for further sub-contract tenders may be required. The architect may require that some elements are covered by a warranty which specifies the expected life and other aspects of the material, product, or work.
In most jurisdictions, prior notification to the relevant authority must be given before commencement of the project, giving the local authority notice to carry out independent inspections. The architect will then review and inspect the progress of the work in coordination with the local authority.
The architect will typically review contractor shop drawings and other submittals, prepare and issue site instructions, and provide Certificates for Payment to the contractor (see also Design-bid-build) which is based on the work done as well as any materials and other goods purchased or hired in the future. In the United Kingdom and other countries, a quantity surveyor is often part of the team to provide cost consulting. With large, complex projects, an independent construction manager is sometimes hired to assist in the design and management of the construction.
In many jurisdictions, mandatory certification or assurance of the completed work or part of works is required. This demand for certification entails a high degree of risk; therefore, regular inspections of the work as it progresses on site are required to ensure that the built work complies with the design as well as with all relevant statutes and permissions.
Alternate practice and specializations
Recent decades have seen the rise of specializations within the profession. Many architects and architectural firms focus on certain project types (e.g. healthcare, retail, public housing, and event management), technological expertise, or project delivery methods. Some architects specialize in building code, building envelope, sustainable design, technical writing, historic preservation (US) or conservation (UK), and accessibility.
Many architects elect to move into real estate (property) development, corporate facilities planning, project management, construction management, chief sustainability officer roles, interior design, city planning, user experience design, and design research.
Professional requirements
Although there are variations in each location, most of the world's architects are required to register with the appropriate jurisdiction. Architects are typically required to meet three common requirements: education, experience, and examination.
The basic educational requirement generally consists of a university degree in architecture. The experience requirement for degree candidates is usually satisfied by a practicum or internship (usually two to three years). Finally, a registration examination or a series of exams is required prior to licensure.
Professionals who engaged in the design and supervision of construction projects prior to the late 19th century were not necessarily trained in a separate architecture program in an academic setting. Instead, they often trained under established architects. Prior to modern times, there was no distinction between architects and engineers, and the title used varied depending on geographical location. They often carried the title of master builder or surveyor after serving a number of years as an apprentice (such as Sir Christopher Wren). The formal study of architecture in academic institutions played a pivotal role in the development of the profession as a whole, serving as a focal point for advances in architectural technology and theory. The use of "Architect" or abbreviations such as "Ar." as a title attached to a person's name is regulated by law in some countries.
Fees
Architects' fees are typically structured as a percentage of construction value, as a rate per unit area of the proposed construction, as hourly rates, or as a fixed lump sum; combinations of these structures are also common. Fixed fees are usually based on a project's allocated construction cost and can range between 4 and 12% of new construction cost for commercial and institutional projects, depending on a project's size and complexity. Residential projects range from 12 to 20%, and renovation projects typically command higher percentages, such as 15–20%.
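As a worked illustration of the percentage structure described above (the project cost here is hypothetical, chosen only to make the arithmetic concrete):

```python
def percentage_fee(construction_cost, low_pct, high_pct):
    """Return the (low, high) fee range for a given construction cost."""
    return (construction_cost * low_pct / 100.0,
            construction_cost * high_pct / 100.0)

# A hypothetical $2M commercial project at the 4-12% range quoted above:
low, high = percentage_fee(2_000_000, 4, 12)  # 80,000.0 to 240,000.0
```

The same function covers the residential (12 to 20%) and renovation (15 to 20%) ranges by changing the percentage arguments.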
Overall billings for architectural firms range widely, depending on their location and economic climate. Billings have traditionally been dependent on local economic conditions, but with rapid globalization, this is becoming less of a factor for large international firms. Salaries also vary depending on experience, position within the firm (i.e. staff architect, partner, or shareholder), and the size and location of the firm.
Professional organizations
A number of national professional organizations exist to promote career and business development in architecture.
The International Union of Architects (UIA)
The American Institute of Architects (AIA) US
Royal Institute of British Architects (RIBA) UK
Architects Registration Board (ARB) UK
The Australian Institute of Architects (AIA) Australia
The South African Institute of Architects (SAIA) South Africa
Association of Consultant Architects (ACA) UK
Association of Licensed Architects (ALA) US
The Consejo Profesional de Arquitectura y Urbanismo (CPAU) Argentina
Indian Institute of Architects (IIA) & Council of Architecture (COA) India
The National Organization of Minority Architects (NOMA) US
Prizes and awards
A wide variety of prizes is awarded by national professional associations and other bodies, recognizing accomplished architects, their buildings, structures, and professional careers.
The most lucrative award an architect can receive is the Pritzker Prize, sometimes termed the "Nobel Prize for architecture." The inaugural Pritzker Prize winner was Philip Johnson, who was cited "for 50 years of imagination and vitality embodied in a myriad of museums, theatres, libraries, houses, gardens and corporate structures". The Pritzker Prize has been awarded in forty-two consecutive editions, and there are now 22 countries with at least one winning architect. Other prestigious architectural awards are the Royal Gold Medal, the AIA Gold Medal (US), the AIA Gold Medal (Australia), and the Praemium Imperiale.
Architects in the UK who have made contributions to the profession through design excellence or architectural education, or who have in some other way advanced the profession, could until 1971 be elected Fellows of the Royal Institute of British Architects and can write FRIBA after their name. Those elected to chartered membership of the RIBA after 1971 may use the initials RIBA but cannot use the old ARIBA and FRIBA. An Honorary Fellow may use the initials Hon. FRIBA, and an International Fellow may use the initials Int. FRIBA.

Architects in the US who have made comparable contributions to the profession are elected Fellows of the American Institute of Architects and can write FAIA after their name.

Architects in Canada who have made outstanding contributions to the profession through research, scholarship, public service, or professional standing to the good of architecture in Canada or elsewhere may be recognized as Fellows of the Royal Architectural Institute of Canada and can write FRAIC after their name.

In Hong Kong, those elected to chartered membership may use the initials HKIA, and those who have made a special contribution, after nomination and election by The Hong Kong Institute of Architects (HKIA), may be elected as fellow members and may use FHKIA after their name.
See also
References
Architecture occupations
Professional certification in architecture
|
https://en.wikipedia.org/wiki/Astrometry
|
Astrometry is a branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. It provides information on the kinematics and physical origin of the Solar System and our galaxy, the Milky Way.
History
The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to Hipparchus, who around 190 BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession. In doing so, he also developed the brightness scale still in use today. Hipparchus compiled a catalogue with at least 850 stars and their positions. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their location, coordinates, and brightness.
In the 10th century, Abd al-Rahman al-Sufi carried out observations on the stars and described their positions, magnitudes and star color; furthermore, he provided drawings for each constellation, which are depicted in his Book of Fixed Stars. Ibn Yunus observed more than 10,000 entries for the Sun's position for many years using a large astrolabe with a diameter of nearly 1.4 metres. His observations on eclipses were still used centuries later in Simon Newcomb's investigations on the motion of the Moon, while his other observations of the motions of the planets Jupiter and Saturn inspired Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogs of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within approximately 20 minutes of arc.
In the 16th century, Tycho Brahe used improved instruments, including large mural instruments, to measure star positions more accurately than previously, with a precision of 15–35 arcsec. Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped measurements.
James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis. His cataloguing of 3222 stars was refined in 1807 by Friedrich Bessel, the father of modern astrometry. He made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. In 1872, William Huggins used spectroscopy to measure the radial velocity of several prominent stars, including Sirius.
Because stellar parallaxes are very difficult to measure, only about 60 had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. Started in the late 19th century, the project Carte du Ciel to improve star mapping was never finished, but it made photography a common technique for astrometry. In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. This technology made astrometry less expensive, opening the field to an amateur audience.
In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescopes. During its 4-year run, the positions, parallaxes, and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalog" drew together a database of 1,058,332 stars to within 20–30 mas (milliarcseconds). Additional catalogues were compiled for the 23,882 double and multiple stars and 11,597 variable stars also analyzed during the Hipparcos mission.
In 2013, the Gaia satellite was launched, improving on the accuracy of Hipparcos by a factor of 100 and enabling the mapping of a billion stars.
Today, the catalogue most often used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions, magnitudes and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec.
Applications
Apart from the fundamental function of providing astronomers with a reference frame to report their observations in, astrometry is also fundamental for fields like celestial mechanics, stellar dynamics and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. It is instrumental for keeping time, in that UTC is essentially the atomic time synchronized to Earth's rotation by means of exact astronomical observations. Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way.
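The parallax rung of the distance ladder mentioned above reduces to a simple reciprocal relation: a star with an annual parallax of p arcseconds lies at 1/p parsecs. A minimal sketch, using Bessel's 0.3 arcsec figure for 61 Cygni from the History section:

```python
PARSEC_IN_LIGHT_YEARS = 3.2616  # approximate conversion factor

def parallax_to_distance_pc(parallax_arcsec):
    """Distance in parsecs from annual parallax in arcseconds: d = 1/p."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Bessel's 0.3 arcsec for 61 Cygni gives roughly 3.3 pc, about 10.9 light-years.
d_pc = parallax_to_distance_pc(0.3)
d_ly = d_pc * PARSEC_IN_LIGHT_YEARS
```

The reciprocal form also makes the measurement difficulty concrete: halving the parallax doubles the inferred distance, so small angular errors translate into large distance errors for remote stars.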
Astrometry has also been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission (SIM PlanetQuest) (now cancelled) was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars. The European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can also be used to determine their mass.
Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions. Also, astrometric results are used to determine the distribution of dark matter in the galaxy.
Astronomers use astrometric techniques for the tracking of near-Earth objects. Astrometry is responsible for the detection of many record-breaking Solar System objects. To find such objects astrometrically, astronomers use telescopes to survey the sky and large-area cameras to take pictures at various determined intervals. By studying these images, they can detect Solar System objects by their movements relative to the background stars, which remain fixed. Once a movement per unit time is observed, astronomers compensate for the parallax caused by Earth's motion during this time and the heliocentric distance to this object is calculated. Using this distance and other photographs, more information about the object, including its orbital elements, can be obtained.
50000 Quaoar and 90377 Sedna are two Solar System objects discovered in this way by Michael E. Brown and others at Caltech using the Palomar Observatory's Samuel Oschin telescope and the Palomar-Quest large-area CCD camera. The ability of astronomers to track the positions and movements of such celestial bodies is crucial to the understanding of the Solar System and its interrelated past, present, and future with others in the Universe.
Statistics
A fundamental aspect of astrometry is error correction. Various factors introduce errors into the measurement of stellar positions, including atmospheric conditions, imperfections in the instruments and errors by the observer or the measuring instruments. Many of these errors can be reduced by various techniques, such as through instrument improvements and compensations to the data. The results are then analyzed using statistical methods to compute data estimates and error ranges.
Computer programs
XParallax viu (Free application for Windows)
Astrometrica (Application for Windows)
Astrometry.net (Online blind astrometry)
See also
References
Further reading
External links
MPC Guide to Minor Body Astrometry
Astrometry Department of the U.S. Naval Observatory
USNO Astrometric Catalog and related Products
Planet-Like Body Discovered at Fringes of Our Solar System (2004-03-15)
Mike Brown's Caltech Home Page
Scientific Paper describing Sedna's discovery
The Hipparcos Space Astrometry Mission — on ESA
Astronomical sub-disciplines
Astrological aspects
Measurement
|
https://en.wikipedia.org/wiki/Acoustics
|
Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.
History
Etymology
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω (akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively.
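The three terms partition the frequency axis around the audible band. A minimal sketch, taking the conventional 20 Hz–20 kHz human hearing range as the boundary (an assumption; actual limits vary between individuals and with age):

```python
def classify_frequency(freq_hz):
    """Label a frequency as infrasonic, audible, or ultrasonic,
    using the conventional 20 Hz-20 kHz audible band (assumed)."""
    if freq_hz < 20:
        return "infrasonic"
    if freq_hz > 20_000:
        return "ultrasonic"
    return "sonic (audible)"

labels = [classify_frequency(f) for f in (5, 440, 40_000)]
```

Here 5 Hz falls below the band, 440 Hz (concert A) inside it, and 40 kHz above it.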
Early research in acoustics
In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious, and the smaller the integers the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length (other factors being equal). In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system of musical tuning, the tones in between are then given by string-length ratios of 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order of pitch.
Aristotle (384–322 BC) understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...", a very good expression of the nature of wave motion. On Things Heard, generally ascribed to Strato of Lampsacus, states that the pitch is related to the frequency of vibrations of the air and to the speed of sound.
In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theaters including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics. In Book V of his De architectura (The Ten Books of Architecture) Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions, which, when interrupted by obstructions, would flow back and break up following waves. He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound and also recommended bronze vessels of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes.
During the Islamic golden age, Abū Rayhān al-Bīrūnī (973-1048) is believed to have postulated that the speed of sound was much slower than the speed of light.
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Galileo Galilei (1564–1642) and, independently, Marin Mersenne (1588–1648) discovered the complete laws of vibrating strings (completing what Pythagoras and the Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687).
Age of Enlightenment and onward
Substantial progress in acoustics, resting on firmer mathematical and physical concepts, was made during the eighteenth century by Euler (1707–1783), Lagrange (1736–1813), and d'Alembert (1717–1783). During this era, continuum physics, or field theory, began to receive a definite mathematical structure. The wave equation emerged in a number of contexts, including the propagation of sound in air.
In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work The Theory of Sound (1877). Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics.
The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the first World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use.
Definition
Acoustics is defined by ANSI/ASA S1.1-2013 as "(a) Science of sound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects."
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
The steps shown in the above diagram can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into sonic energy, producing a sound wave. There is one fundamental equation that describes sound wave propagation, the acoustic wave equation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing at a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment. This interaction can be described as diffraction, interference, reflection, or a mix of the three. If several media are present, refraction can also occur. Transduction processes are also of special importance to acoustics.
Fundamental concepts
Wave propagation: pressure levels
In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is related to the sound pressure level (SPL) which is measured on a logarithmic scale in decibels.
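The logarithmic decibel scale mentioned above can be sketched in a few lines of Python. The 20 µPa reference pressure used here is the conventional threshold-of-hearing value in air, an added detail not stated in the text, and the function name is illustrative:

```python
import math

# Reference pressure for dB SPL in air: 20 micropascals, the conventional
# threshold of hearing (roughly nine orders of magnitude below the
# ambient atmospheric pressure of about 101 kPa).
P_REF = 20e-6  # pascals

def spl_db(pressure_pa: float) -> float:
    """Convert an RMS sound pressure in pascals to sound pressure level in dB."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # the threshold of hearing itself sits at 0 dB SPL
print(spl_db(20.0))   # a disturbance a million times larger is ~120 dB SPL
```

Each factor of ten in pressure adds 20 dB, which is why the enormous dynamic range of human hearing fits on a compact scale.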
Wave propagation: frequency
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time and then presented in more meaningful forms such as octave bands or time-frequency plots. Both of these popular methods are used to analyze sound and better understand acoustic phenomena.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allow better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.
Analytic instruments such as the spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. The spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character.
Transduction in acoustics
A transducer is a device for converting one form of energy into another. In an electroacoustic context, this means converting sound energy into electrical energy (or vice versa). Electroacoustic transducers include loudspeakers, microphones, particle velocity sensors, hydrophones and sonar projectors. These devices convert a sound wave to or from an electric signal. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity.
The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics—as the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which mechanical vibrations and electrical fields are interlinked through a property of the material itself.
Acoustician
An acoustician is an expert in the science of sound.
Education
There are many types of acoustician, but they usually have a Bachelor's degree or higher qualification. Some possess a degree in acoustics, while others enter the discipline via studies in fields such as physics or engineering. Much work in acoustics requires a good grounding in mathematics and science. Many acoustic scientists work in research and development. Some conduct basic research to advance our knowledge of the perception (e.g. hearing, psychoacoustics or neurophysiology) of speech, music and noise. Other acoustic scientists advance understanding of how sound is affected as it moves through environments, e.g. underwater acoustics, architectural acoustics or structural acoustics. Other areas of work are listed under subdisciplines below. Acoustic scientists work in government, university and private industry laboratories. Many go on to work in acoustical engineering. Some positions, such as faculty (academic staff), require a Doctor of Philosophy.
Subdisciplines
Archaeoacoustics
Archaeoacoustics, also known as the archaeology of sound, is one of the few ways to experience the past with senses other than sight. Archaeoacoustics is studied by testing the acoustic properties of prehistoric sites, including caves. Iegor Reznikoff, a sound archaeologist, studies the acoustic properties of caves through natural sounds like humming and whistling. Archaeological theories of acoustics focus on ritualistic purposes as well as echolocation in caves. In archaeology, acoustic sounds and rituals directly correlate, as specific sounds were meant to bring ritual participants closer to a spiritual awakening. Parallels can also be drawn between cave wall paintings and the acoustic properties of the cave: both are dynamic. Because archaeoacoustics is a fairly new archaeological subject, acoustic sound is still being tested in these prehistoric sites today.
Aeroacoustics
Aeroacoustics is the study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. This knowledge is applied in acoustical engineering to study how to quieten aircraft. Aeroacoustics is important for understanding how wind musical instruments work.
Acoustic signal processing
Acoustic signal processing is the electronic manipulation of acoustic signals. Applications include: active noise control; design for hearing aids or cochlear implants; echo cancellation; music information retrieval, and perceptual coding (e.g. MP3 or Opus).
Architectural acoustics
Architectural acoustics (also known as building acoustics) involves the scientific understanding of how to achieve good sound within a building. It typically involves the study of speech intelligibility, speech privacy, music quality, and vibration reduction in the built environment. Commonly studied environments are hospitals, classrooms, dwellings, performance venues, recording and broadcasting studios. Focus considerations include room acoustics, airborne and impact transmission in building structures, airborne and structure-borne noise control, noise control of building systems and electroacoustic systems.
Bioacoustics
Bioacoustics is the scientific study of the hearing and calls of animals, as well as how animals are affected by the acoustics and sounds of their habitat.
Electroacoustics
This subdiscipline is concerned with the recording, manipulation and reproduction of audio using electronics. This might include products such as mobile phones, large scale public address systems or virtual reality systems in research laboratories.
Environmental noise and soundscapes
Environmental acoustics is concerned with noise and vibration caused by railways, road traffic, aircraft, industrial equipment and recreational activities. The main aim of these studies is to reduce levels of environmental noise and vibration. Research work now also has a focus on the positive use of sound in urban environments: soundscapes and tranquility.
Musical acoustics
Musical acoustics is the study of the physics of acoustic instruments; the audio signal processing used in electronic music; the computer analysis of music and composition, and the perception and cognitive neuroscience of music.
Noise
The goal of this acoustics sub-discipline is to reduce the impact of unwanted sound. The scope of noise studies includes the generation and propagation of noise and its impact on structures, objects, and people, as well as:
Innovative model development
Measurement techniques
Mitigation strategies
Input to the establishment of standards and regulations
Noise research investigates the impact of noise on humans and animals, including work on definitions, abatement, transportation noise, hearing protection, jet and rocket noise, building system noise and vibration, atmospheric sound propagation, soundscapes, and low-frequency sound.
Psychoacoustics
Many studies have been conducted to identify the relationship between acoustics and cognition, more commonly known as psychoacoustics, in which what one hears is a combination of perception and biological aspects. The information carried by sound waves through the ear is understood and interpreted by the brain, emphasizing the connection between the mind and acoustics. Psychological changes have been observed as brain waves slow down or speed up in response to varying auditory stimuli, which can in turn affect the way one thinks, feels, or even behaves. This correlation can be seen in normal, everyday situations: listening to an upbeat or uptempo song can cause one's foot to start tapping, while a slower song can leave one feeling calm and serene. In a deeper biological look at psychoacoustics, it was discovered that the central nervous system is activated by basic acoustical characteristics of music. By observing how the central nervous system, which includes the brain and spine, is influenced by acoustics, the pathway by which acoustics affects the mind, and essentially the body, becomes evident.
Speech
Acousticians study the production, processing and perception of speech. Speech recognition and speech synthesis are two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics, physiology, psychology, and linguistics.
Structural Vibration and Dynamics
Structural acoustics is the study of motions and interactions of mechanical systems with their environments and the methods of their measurement, analysis, and control. There are several sub-disciplines found within this regime:
Modal Analysis
Material characterization
Structural health monitoring
Acoustic Metamaterials
Friction Acoustics
Applications might include: ground vibrations from railways; vibration isolation to reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger); vibration control to protect a building from earthquakes, or measuring how structure-borne sound moves through buildings.
Ultrasonics
Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography), sonochemistry, ultrasonic testing, material characterisation and underwater acoustics (sonar).
Underwater acoustics
Underwater acoustics is the scientific study of natural and man-made sounds underwater. Applications include sonar to locate submarines, underwater communication by whales, climate change monitoring by measuring sea temperatures acoustically, sonic weapons, and marine bioacoustics.
Acoustic Conferences
InterNoise
NoiseCon
Forum Acousticum
SAE Noise and Vibration Conference and Exhibition
Professional societies
The Acoustical Society of America (ASA)
Australian Acoustical Society (AAS)
The European Acoustics Association (EAA)
Institute of Electrical and Electronics Engineers (IEEE)
Institute of Acoustics (IoA UK)
The Audio Engineering Society (AES)
American Society of Mechanical Engineers, Noise Control and Acoustics Division (ASME-NCAD)
International Commission for Acoustics (ICA)
American Institute of Aeronautics and Astronautics, Aeroacoustics (AIAA)
International Computer Music Association (ICMA)
Academic journals
Acoustics | An Open Access Journal from MDPI
Acoustics Today
Acta Acustica united with Acustica
Advances in Acoustics and Vibration
Applied Acoustics
Building Acoustics
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Journal of the Acoustical Society of America (JASA)
Journal of the Acoustical Society of America, Express Letters (JASA-EL)
Journal of the Audio Engineering Society
Journal of Sound and Vibration (JSV)
Journal of Vibration and Acoustics American Society of Mechanical Engineers
MDPI Acoustics
Noise Control Engineering Journal
SAE International Journal of Vehicle Dynamics, Stability and NVH
Ultrasonics (journal)
Ultrasonics Sonochemistry
Wave Motion
See also
Outline of acoustics
Acoustic attenuation
Acoustic emission
Acoustic engineering
Acoustic impedance
Acoustic levitation
Acoustic location
Acoustic phonetics
Acoustic streaming
Acoustic tags
Acoustic thermometry
Acoustic wave
Audiology
Auditory illusion
Diffraction
Doppler effect
Fisheries acoustics
Friction acoustics
Helioseismology
Lamb wave
Linear elasticity
The Little Red Book of Acoustics (in the UK)
Longitudinal wave
Musicology
Music therapy
Noise pollution
One-Way Wave Equation
Phonon
Picosecond ultrasonics
Rayleigh wave
Shock wave
Seismology
Sonification
Sonochemistry
Soundproofing
Soundscape
Sonic boom
Sonoluminescence
Surface acoustic wave
Thermoacoustics
Transverse wave
Wave equation
References
Further reading
External links
International Commission for Acoustics
European Acoustics Association
Acoustical Society of America
Institute of Noise Control Engineers
National Council of Acoustical Consultants
Institute of Acoustic in UK
Australian Acoustical Society (AAS)
Sound
https://en.wikipedia.org/wiki/Applet
In computing, an applet is any small application that performs one specific task that runs within the scope of a dedicated widget engine or a larger program, often as a plug-in. The term is frequently used to refer to a Java applet, a program written in the Java programming language that is designed to be placed on a web page. Applets are typical examples of transient and auxiliary applications that do not monopolize the user's attention. Applets are not full-featured application programs, and are intended to be easily accessible.
History
The word applet was first used in 1990 in PC Magazine. However, the concept of an applet, or more broadly a small interpreted program downloaded and executed by the user, dates at least to RFC 5 (1969) by Jeff Rulifson, which described the Decode-Encode Language, which was designed to allow remote use of the oN-Line System over ARPANET, by downloading small programs to enhance the interaction. This has been specifically credited as a forerunner of Java's downloadable programs in RFC 2555.
Applet as an extension of other software
In some cases, an applet does not run independently. These applets must run either in a container provided by a host program, through a plugin, or in a variety of other applications, including mobile devices, that support the applet programming model.
Web-based applets
Applets were used to provide interactive features to web applications that historically could not be provided by HTML alone. They could capture mouse input and also had controls like buttons or check boxes. In response to the user action, an applet could change the provided graphic content. This made applets well suited for demonstration, visualization, and teaching. There were online applet collections for studying various subjects, from physics to heart physiology. Applets were also used to create online game collections that allowed players to compete against live opponents in real-time.
An applet could also be a text area only, providing, for instance, a cross-platform command-line interface to some remote system. If needed, an applet could leave the dedicated area and run as a separate window. However, applets had very little control over web page content outside the applet dedicated area, so they were less useful for improving the site appearance in general (while applets like news tickers or WYSIWYG editors are also known). Applets could also play media in formats that are not natively supported by the browser.
HTML pages could embed parameters that were passed to the applet. Hence, the same applet could appear differently depending on the parameters that were passed.
Examples of Web-based applets include:
QuickTime movies
Flash movies
Windows Media Player applets, used to display embedded video files in Internet Explorer (and other browsers that supported the plugin)
3D modeling display applets, used to rotate and zoom a model
Browser games that were applet-based, though some developed into fully functional applications that required installation.
Applet versus subroutine
A larger application distinguishes its applets through several features:
Applets execute only on the "client" platform environment of a system, as contrasted with a servlet. As such, an applet provides functionality or performance beyond the default capabilities of its container (the browser).
The container restricts applets' capabilities.
Applets are written in a language different from the scripting or HTML language that invokes them. The applet is written in a compiled language, whereas the scripting language of the container is an interpreted language, hence the greater performance or functionality of the applet. Unlike a subroutine, a complete web component can be implemented as an applet.
Java applets
A Java applet is a Java program that is launched from HTML and run in a web browser. It takes its code from a server and runs in the browser, and it can provide web applications with interactive features that cannot be provided by HTML. Since Java's bytecode is platform-independent, Java applets can be executed by browsers running under many platforms, including Windows, Unix, macOS, and Linux. When a Java technology-enabled web browser processes a page that contains an applet, the applet's code is transferred to the client's system and executed by the browser's Java virtual machine. An HTML page references an applet either via the deprecated applet tag or via its replacement, the object tag.
Security
Recent developments in the coding of applications, including mobile and embedded systems, have heightened awareness of the security of applets.
Open platform applets
Applets in an open platform environment should provide secure interactions between different applications. A compositional approach can be used to provide security for open platform applets. Advanced compositional verification methods have been developed for secure applet interactions.
Java applets
Java applets can run under different security models: unsigned Java applet security, signed Java applet security, and self-signed Java applet security.
Web-based applets
In an applet-enabled web browser, many methods can be used to provide security against malicious applets. A malicious applet can infect a computer system in many ways, including denial of service, invasion of privacy, and annoyance. A typical countermeasure is to have the web browser monitor applets' activities, enabling the manual or automatic stopping of malicious applets.
See also
Application posture
Bookmarklet
Java applet
Widget engine
Abstract Window Toolkit
References
External links
Technology neologisms
Component-based software engineering
Java (programming language) libraries
https://en.wikipedia.org/wiki/Area
Area is the measure of a region's size on a surface. The area of a plane region or plane area refers to the area of a shape or planar lamina, while surface area refers to the area of an open surface or the boundary of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analogue of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
Two different regions may have the same area (as in squaring the circle); by synecdoche, "area" sometimes is used to refer to the region, as in a "polygonal area".
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m²), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
Formal definition
An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function from a collection M of special kinds of plane figures (termed measurable sets) to the set of real numbers, which satisfies the following properties:
For all S in M, a(S) ≥ 0.
If S and T are in M then so are S ∪ T and S ∩ T, and also a(S ∪ T) = a(S) + a(T) − a(S ∩ T).
If S and T are in M with S ⊆ T, then T − S is in M and a(T − S) = a(T) − a(S).
If a set S is in M and S is congruent to T then T is also in M and a(T) = a(S).
Every rectangle R is in M. If the rectangle has length h and breadth k then a(R) = hk.
Let Q be a set enclosed between two step regions S and T. A step region is formed from a finite union of adjacent rectangles resting on a common base, i.e. S ⊆ Q ⊆ T. If there is a unique number c such that a(S) ≤ c ≤ a(T) for all such step regions S and T, then a(Q) = c.
It can be proved that such an area function actually exists.
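The last axiom, which squeezes the area of a region Q between inner and outer step regions, can be illustrated numerically. The sketch below (function name and the choice of a quarter of the unit disk are illustrative, not from the text) brackets the quarter disk's area, which converges to π/4:

```python
import math

# Bracket the area of a quarter of the unit disk (x >= 0, y >= 0,
# x**2 + y**2 <= 1) between an inner and an outer step region built
# from n adjacent rectangles resting on the base [0, 1].
def step_region_bounds(n: int) -> tuple[float, float]:
    f = lambda x: math.sqrt(1 - x * x)  # upper boundary of the region
    w = 1.0 / n                         # width of each rectangle
    # f is decreasing, so right endpoints give rectangles inside the
    # region (lower bound) and left endpoints give rectangles that
    # cover it (upper bound).
    lower = sum(f((i + 1) * w) * w for i in range(n))
    upper = sum(f(i * w) * w for i in range(n))
    return lower, upper

lo, hi = step_region_bounds(10_000)
print(lo, hi)  # both approach pi/4 ≈ 0.785398...
```

As n grows, the gap between the bounds shrinks, and the unique number squeezed between all such lower and upper sums is, by the axiom, the area of the quarter disk.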
Units
Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m²), square centimetres (cm²), square millimetres (mm²), square kilometres (km²), square feet (ft²), square yards (yd²), square miles (mi²), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units.
The SI unit of area is the square metre, which is considered an SI derived unit.
Conversions
Calculation of the area of a square whose length and width are 1 metre would be:
1 metre × 1 metre = 1 m²
and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as:
3 metres × 2 metres = 6 m². This is equivalent to 6 million square millimetres. Other useful conversions are:
1 square kilometre = 1,000,000 square metres
1 square metre = 10,000 square centimetres = 1,000,000 square millimetres
1 square centimetre = 100 square millimetres.
Non-metric units
In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units. For example, since
1 foot = 12 inches,
the relationship between square feet and square inches is
1 square foot = 144 square inches,
where 144 = 12² = 12 × 12. Similarly:
1 square yard = 9 square feet
1 square mile = 3,097,600 square yards = 27,878,400 square feet
In addition, conversion factors include:
1 square inch = 6.4516 square centimetres
1 square foot = 0.09290304 square metres
1 square yard = 0.83612736 square metres
1 square mile = 2.589988110336 square kilometres
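Since each square-unit factor is just the square of the corresponding length-unit factor, the conversions above can be derived mechanically. A small sketch (constant names are illustrative; 0.3048 m per foot is exact by definition of the international foot):

```python
# Square-unit conversion factors are the squares of the length factors.
INCHES_PER_FOOT = 12
FEET_PER_YARD = 3
FEET_PER_MILE = 5280
METRES_PER_FOOT = 0.3048  # exact, by definition of the international foot

print(INCHES_PER_FOOT ** 2)   # 144 square inches per square foot
print(FEET_PER_YARD ** 2)     # 9 square feet per square yard
print(FEET_PER_MILE ** 2)     # 27878400 square feet per square mile
print(METRES_PER_FOOT ** 2)   # ≈ 0.09290304 square metres per square foot
```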
Other units including historical
There are several other common units for area. The are was the original unit of area in the metric system, with:
1 are = 100 square metres
Though the are has fallen out of use, the hectare is still commonly used to measure land:
1 hectare = 100 ares = 10,000 square metres = 0.01 square kilometres
Other uncommon metric units of area include the tetrad, the hectad, and the myriad.
The acre is also commonly used to measure land areas, where
1 acre = 4,840 square yards = 43,560 square feet.
An acre is approximately 40% of a hectare.
On the atomic scale, area is measured in units of barns, such that:
1 barn = 10⁻²⁸ square metres.
The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics.
In South Asia (mainly India), although the countries officially use SI units, many South Asians still use traditional units. Each administrative division has its own area unit; some of these share names but have different values. There is no official consensus on the values of the traditional units, so conversions between SI units and traditional units may give different results, depending on which reference has been used.
Some traditional South Asian units that have fixed value:
1 Killa = 1 acre
1 Ghumaon = 1 acre
1 Kanal = 0.125 acre (1 acre = 8 kanal)
1 Decimal = 48.4 square yards
1 Chatak = 180 square feet
History
Circle area
In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the 5th century BCE, also found that the area of a disk is proportional to its radius squared.
Subsequently, Book I of Euclid's Elements dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. (The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons).
Triangle area
Quadrilateral area
In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842, the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral.
General polygon area
The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century.
Areas determined using calculus
The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects.
Area formulas
Polygon formulas
For a non-self-intersecting (simple) polygon whose n vertices have known Cartesian coordinates (x_i, y_i) (i = 0, 1, ..., n−1), the area is given by the surveyor's formula:
A = (1/2) |Σ_{i=0}^{n−1} (x_i y_{i+1} − x_{i+1} y_i)|
where when i = n−1, then i + 1 is taken modulo n and so refers to 0.
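The surveyor's formula lends itself to a direct implementation. A sketch (function name illustrative), assuming the vertices are listed in order around a simple polygon:

```python
# Surveyor's (shoelace) formula for a simple polygon whose vertices are
# given in order; the vertex index wraps modulo n, as described above.
def polygon_area(vertices: list[tuple[float, float]]) -> float:
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # i + 1 taken modulo n
        s += x0 * y1 - x1 * y0
    return abs(s) / 2

print(polygon_area([(0, 0), (3, 0), (3, 2), (0, 2)]))  # a 3 × 2 rectangle: 6.0
print(polygon_area([(0, 0), (4, 0), (0, 3)]))          # a right triangle: 6.0
```

The absolute value makes the result independent of whether the vertices are listed clockwise or counterclockwise.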
Rectangles
The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is:
A = lw (rectangle).
That is, the area of the rectangle is the length multiplied by the width. As a special case, as l = w in the case of a square, the area of a square with side length s is given by the formula:
A = s² (square).
The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
Dissection, parallelograms, and triangles
Most other simple formulas for area follow from the method of dissection.
This involves cutting a shape into pieces, whose areas must sum to the area of the original shape.
For example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, the resulting figure is a rectangle. It follows that the area of a parallelogram with base b and height h is the same as the area of the rectangle:
A = bh (parallelogram).
However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram:
A = ½bh (triangle).
Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons.
Area of curved shapes
Circles
The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is πr²:
A = πr² (circle).
Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr², which is the area of the circle.
This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral:
A = ∫_{−r}^{r} 2√(r² − x²) dx = πr².
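The definite-integral computation of a disk's area, ∫ 2√(r² − x²) dx from −r to r, can be approximated numerically; the midpoint-rule sketch below (function name and step count are our choices) converges to πr²:

```python
import math

def circle_area_numeric(r, n=100_000):
    """Approximate the disk area as the definite integral of
    2*sqrt(r^2 - x^2) from -r to r, using the midpoint rule with n cells."""
    h = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * h       # midpoint of cell i
        total += 2 * math.sqrt(r * r - x * x)
    return total * h

print(circle_area_numeric(1.0))  # close to pi
print(math.pi)
```

Refining the partition (larger n) drives the approximation toward the exact value, mirroring the limiting argument in the text.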
Ellipses
The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes a and b the formula is:
A = πab.
Non-planar surface area
Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out (see: developable surfaces). For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is:
A = 4πr² (sphere),
where r is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus.
General formulas
Areas of 2-dimensional figures
A triangle: ½Bh (where B is any side, and h is the distance from the line on which B lies to the other vertex of the triangle). This formula can be used if the height h is known. If the lengths of the three sides are known then Heron's formula can be used: √(s(s − a)(s − b)(s − c)), where a, b, c are the sides of the triangle, and s = ½(a + b + c) is half of its perimeter. If an angle and its two included sides are given, the area is ½ab sin(C), where C is the given angle and a and b are its included sides. If the triangle is graphed on a coordinate plane, a matrix can be used and is simplified to the absolute value of ½(x₁y₂ + x₂y₃ + x₃y₁ − x₂y₁ − x₃y₂ − x₁y₃). This formula is also known as the shoelace formula and is an easy way to solve for the area of a coordinate triangle by substituting the 3 points (x₁, y₁), (x₂, y₂), and (x₃, y₃). The shoelace formula can also be used to find the areas of other polygons when their vertices are known. Another approach for a coordinate triangle is to use calculus to find the area.
A simple polygon constructed on a grid of equal-distanced points (i.e., points with integer coordinates) such that all the polygon's vertices are grid points: A = i + b/2 − 1, where i is the number of grid points inside the polygon and b is the number of boundary points. This result is known as Pick's theorem.
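Pick's theorem can be verified directly on a small lattice polygon. In the sketch below (function names and the test triangle are our own), the area comes from the shoelace formula, boundary points are counted with a gcd per edge, and interior points are counted by brute force:

```python
from math import gcd

def shoelace(vertices):
    """Polygon area via the surveyor's (shoelace) formula."""
    n = len(vertices)
    s = 0
    for k in range(n):
        x0, y0 = vertices[k]
        x1, y1 = vertices[(k + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2

def boundary_points(vertices):
    """gcd(|dx|, |dy|) counts the lattice points on each edge,
    excluding one endpoint, so summing over edges counts b exactly once."""
    n = len(vertices)
    return sum(gcd(abs(vertices[(k + 1) % n][0] - vertices[k][0]),
                   abs(vertices[(k + 1) % n][1] - vertices[k][1]))
               for k in range(n))

# Right triangle with legs 4 and 3; its hypotenuse is the line 3x + 4y = 12.
tri = [(0, 0), (4, 0), (0, 3)]
A = shoelace(tri)              # 6.0
b = boundary_points(tri)       # 8
interior = sum(1 for x in range(1, 4) for y in range(1, 3)
               if 3 * x + 4 * y < 12)   # strictly inside: 3 points
print(A, interior, b)                   # 6.0 3 8
print(A == interior + b / 2 - 1)        # True — Pick's theorem holds
```

The same check works for any simple lattice polygon, which is what makes the theorem useful for quick area counts on grid paper.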
Area in calculus
The area between a positive-valued curve and the horizontal axis, measured between two values a and b (b is defined as the larger of the two values) on the horizontal axis, is given by the integral from a to b of the function that represents the curve:
A = ∫_a^b f(x) dx.
The area between the graphs of two functions is equal to the integral of one function, f(x), minus the integral of the other function, g(x):
A = ∫_a^b (f(x) − g(x)) dx,
where f(x) is the curve with the greater y-value.
An area bounded by a function r = r(θ) expressed in polar coordinates is:
A = ½ ∫ r² dθ.
The area enclosed by a parametric curve u(t) = (x(t), y(t)) with endpoints u(t₀) = u(t₁) is given by the line integrals:
∮ x dy = −∮ y dx = ½ ∮ (x dy − y dx),
or the z-component of
½ ∮ u × du.
(For details, see Green's theorem § Area calculation.) This is the principle of the planimeter, a mechanical device for measuring areas.
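The line-integral identity that the planimeter exploits can be checked numerically. This sketch (function and variable names are our own) evaluates ½ ∮ (x dy − y dx) with the midpoint rule and recovers the ellipse area πab:

```python
import math

def parametric_area(x, y, dx, dy, t0, t1, n=100_000):
    """Area enclosed by a closed parametric curve via (1/2)∮(x dy − y dx).

    x, y: coordinate functions of t; dx, dy: their derivatives.
    Uses the midpoint rule on [t0, t1], assuming the curve closes there.
    """
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        total += x(t) * dy(t) - y(t) * dx(t)
    return 0.5 * total * h

# Ellipse with semi-axes a = 3, b = 2: x = a cos t, y = b sin t.
a, b = 3.0, 2.0
A = parametric_area(lambda t: a * math.cos(t), lambda t: b * math.sin(t),
                    lambda t: -a * math.sin(t), lambda t: b * math.cos(t),
                    0.0, 2 * math.pi)
print(A, math.pi * a * b)  # both ≈ 18.8496
```

For this ellipse the integrand x dy − y dx works out to the constant ab, so the quadrature is essentially exact; for general closed curves the result converges as n grows.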
Bounded area between two quadratic functions
To find the bounded area between two quadratic functions, we subtract one from the other to write the difference as
f(x) − g(x) = ax² + bx + c,
where f(x) is the quadratic upper bound and g(x) is the quadratic lower bound. Define the discriminant of f(x) − g(x) as
D = b² − 4ac.
By simplifying the integral formula between the graphs of two functions (as given in the section above) and using Vieta's formulas, we can obtain
A = D^(3/2) / (6a²).
The above remains valid if one of the bounding functions is linear instead of quadratic.
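The closed form can be cross-checked against direct integration. The sketch below (function name and coefficient convention are our own) assumes the difference f − g = ax² + bx + c has two real roots, so D > 0; for f(x) = 1 − x² above g(x) = 0 the bounded area is 4/3:

```python
def area_between_quadratics(f_coeffs, g_coeffs):
    """Closed-form area between two quadratics (or a quadratic and a line)
    that intersect twice, via A = D**1.5 / (6*a**2) with D the discriminant
    of f - g. Coefficients are (a, b, c) for a*x**2 + b*x + c."""
    a = f_coeffs[0] - g_coeffs[0]
    b = f_coeffs[1] - g_coeffs[1]
    c = f_coeffs[2] - g_coeffs[2]
    D = b * b - 4 * a * c
    if a == 0 or D <= 0:
        raise ValueError("curves must differ quadratically and intersect twice")
    return D ** 1.5 / (6 * a * a)

# f(x) = -x^2 + 1 above g(x) = 0 between x = -1 and x = 1: area 4/3.
print(area_between_quadratics((-1, 0, 1), (0, 0, 0)))  # → 1.3333...
# f(x) = x - x^2 above g(x) = 0 between x = 0 and x = 1: area 1/6.
print(area_between_quadratics((-1, 1, 0), (0, 0, 0)))  # → 0.1666...
```

Both outputs agree with the elementary integrals ∫(1 − x²) dx over [−1, 1] and ∫(x − x²) dx over [0, 1].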
Surface area of 3-dimensional figures
Cone: πr(r + √(r² + h²)), where r is the radius of the circular base, and h is the height. That can also be rewritten as πr² + πrl or πr(r + l), where r is the radius and l is the slant height of the cone. πr² is the base area while πrl is the lateral surface area of the cone.
Cube: 6s², where s is the length of an edge.
Cylinder: 2πr(h + r), where r is the radius of a base and h is the height. The 2πr can also be rewritten as πd, where d is the diameter.
Prism: 2B + Ph, where B is the area of a base, P is the perimeter of a base, and h is the height of the prism.
Pyramid: B + PL/2, where B is the area of the base, P is the perimeter of the base, and L is the length of the slant.
Rectangular prism: 2(lw + lh + wh), where l is the length, w is the width, and h is the height.
General formula for surface area
The general formula for the surface area of the graph of a continuously differentiable function z = f(x, y), where (x, y) ∈ D and D is a region in the xy-plane with a smooth boundary, is:
A = ∬_D √((∂f/∂x)² + (∂f/∂y)² + 1) dx dy.
An even more general formula for the area of the graph of a parametric surface in the vector form r = r(u, v), where r is a continuously differentiable vector function of (u, v) ∈ D, is:
A = ∬_D |∂r/∂u × ∂r/∂v| du dv.
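As a sanity check of the parametric formula, one can integrate the magnitude of the cross product of the partial derivatives numerically for the standard sphere parametrization and compare against Archimedes' 4πr². Function names and grid size below are our choices:

```python
import math

def sphere_surface_numeric(R, n=400):
    """Numerically integrate |r_u × r_v| over [0, π] × [0, 2π] for the
    parametrization r(u, v) = R(sin u cos v, sin u sin v, cos u);
    analytically |r_u × r_v| = R^2 sin u."""
    def cross_norm(u, v):
        # Partial derivatives of r with respect to u and v.
        ru = (R * math.cos(u) * math.cos(v),
              R * math.cos(u) * math.sin(v),
              -R * math.sin(u))
        rv = (-R * math.sin(u) * math.sin(v),
              R * math.sin(u) * math.cos(v),
              0.0)
        cx = ru[1] * rv[2] - ru[2] * rv[1]
        cy = ru[2] * rv[0] - ru[0] * rv[2]
        cz = ru[0] * rv[1] - ru[1] * rv[0]
        return math.sqrt(cx * cx + cy * cy + cz * cz)

    hu, hv = math.pi / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * hu
        for j in range(n):
            v = (j + 0.5) * hv
            total += cross_norm(u, v)
    return total * hu * hv

print(sphere_surface_numeric(1.0))  # ≈ 4π ≈ 12.566
```

The numeric result matches 4πR² to several digits, even though the code never uses the sphere formula itself, only the general parametric-surface integral.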
List of formulas
The above calculations show how to find the areas of many common shapes.
The areas of irregular (and thus arbitrary) polygons can be calculated using the "Surveyor's formula" (shoelace formula).
Relation of area to perimeter
The isoperimetric inequality states that, for a closed curve of length L (so the region it encloses has perimeter L) and for area A of the region that it encloses,
4πA ≤ L²,
and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter.
At the other extreme, a figure with given perimeter L could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°.
For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius r. This can be seen from the area formula πr² and the circumference formula 2πr.
The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side).
Fractals
Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal.
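The scaling argument can be made concrete: if scaling lengths by a factor s multiplies the content by N, the dimension is log N / log s. For a square, doubling the edges quadruples the area, giving dimension 2; for the Koch curve (each segment replaced by 4 copies at one-third scale, an example we supply for illustration) the exponent is non-integer:

```python
import math

# Dimension = log(content scale factor) / log(length scale factor).
d_square = math.log(4) / math.log(2)  # doubling edges quadruples area → 2.0
d_koch = math.log(4) / math.log(3)    # 4 copies at 1/3 scale → ≈ 1.2619

print(d_square, d_koch)
```

The Koch curve's dimension of about 1.26 sits strictly between a curve's dimension 1 and a filled region's dimension 2, which is exactly what the text means by a non-integer power.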
Area bisectors
There are an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle.
Any line through the midpoint of a parallelogram bisects the area.
All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle.
Optimization
Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap films.
The question of the filling area of the Riemannian circle remains open.
The circle has the largest area of any two-dimensional object having the same perimeter.
A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths.
A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral.
The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral.
The ratio of the area of the incircle to the area of an equilateral triangle, π/(3√3), is larger than that of any non-equilateral triangle.
The ratio of the area to the square of the perimeter of an equilateral triangle, 1/(12√3), is larger than that for any other triangle.
See also
Brahmagupta quadrilateral, a cyclic quadrilateral with integer sides, integer diagonals, and integer area.
Equiareal map
Heronian triangle, a triangle with integer sides and integer area.
List of triangle inequalities
One-seventh area triangle, an inner triangle with one-seventh the area of the reference triangle.
Routh's theorem, a generalization of the one-seventh area triangle.
Orders of magnitude—A list of areas by size.
Derivation of the formula of a pentagon
Planimeter, an instrument for measuring small areas, e.g. on maps.
Area of a convex quadrilateral
Robbins pentagon, a cyclic pentagon whose side lengths and area are all rational numbers.
References
External links
|
https://en.wikipedia.org/wiki/Anisotropy
|
Anisotropy is the structural property of non-uniformity in different directions, as opposed to isotropy. An anisotropic object or pattern has properties that differ according to the direction of measurement. For example, many materials exhibit different physical or mechanical properties (absorbance, refractive index, conductivity, tensile strength, etc.) when measured along different axes.
An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it because of the directional non-uniformity of the grain (the grain is the same in one direction, not all directions).
Fields of interest
Computer graphics
In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet.
Anisotropic filtering (AF) is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. By reducing detail in one direction more than another, these effects can be reduced easily.
Chemistry
A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximal regions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration.
In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change.
Real-world imagery
Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal).
Physics
Physicists from the University of California, Berkeley reported their detection of the cosmic anisotropy in cosmic microwave background radiation in 1977. Their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the early-Universe matter, the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and the polarization angles of quasars.
Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation" (such as that seen in lightning or a plasma globe) that is directional.
An anisotropic liquid has the fluidity of a normal liquid, but its molecules have an average structural order along the molecular axis, unlike water or chloroform, which contain no structural ordering of their molecules. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat in a way that is isotropic, that is, independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of the typically diverse materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic.
Many crystals are anisotropic to light ("optical anisotropy"), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes.
Geophysics and geology
Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers, or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; significant seismic anisotropy has been detected in the Earth's crust, mantle, and inner core.
Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; electrical conductivity in one direction (e.g. parallel to a layer), is different from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Sand-bearing hydrocarbon assets have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity or resistivity, and the results are used to help find oil and gas in wells. The mechanical anisotropy measured for some of the sedimentary rocks like coal and shale can change with corresponding changes in their surface properties like sorption when gases are produced from the coal and shale reservoirs.
The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account; otherwise the results may be subject to error.
Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet.
Medical acoustics
Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners.
Materials science and engineering
Anisotropy, in materials science, is a material's directional dependence of a physical property. This is a critical consideration for materials selection in engineering applications. A material with physical properties that are symmetric about an axis that is normal to a plane of isotropy is called a transversely isotropic material. Tensor descriptions of material properties can be used to determine the directional dependence of that property. For a monocrystalline material, anisotropy is associated with the crystal symmetry in the sense that more symmetric crystal types have fewer independent coefficients in the tensor description of a given property. When a material is polycrystalline, the directional dependence on properties is often related to the processing techniques it has undergone. A material with randomly oriented grains will be isotropic, whereas materials with texture will often be anisotropic. Textured materials are often the result of processing techniques like cold rolling, wire drawing, and heat treatment.
Mechanical properties of materials such as Young's modulus, ductility, yield strength, and high-temperature creep rate, are often dependent on the direction of measurement. Fourth-rank tensor properties, like the elastic constants, are anisotropic, even for materials with cubic symmetry. The Young's modulus relates stress and strain when an isotropic material is elastically deformed; to describe elasticity in an anisotropic material, stiffness (or compliance) tensors are used instead.
In metals, anisotropic elasticity behavior is present in all single crystals with three independent coefficients for cubic crystals, for example. For face-centered cubic materials such as nickel and copper, the stiffness is highest along the <111> direction, normal to the close-packed planes, and smallest parallel to <100>. Tungsten is so nearly isotropic at room temperature that it can be considered to have only two stiffness coefficients; aluminium is another metal that is nearly isotropic.
For an isotropic material, G = E/(2(1 + ν)), where G is the shear modulus, E is the Young's modulus, and ν is the material's Poisson's ratio. Therefore, for cubic materials, we can think of anisotropy as the ratio between the empirically determined shear modulus for the cubic material and its (isotropic) equivalent:
a_r = G / (E/(2(1 + ν))) = 2(1 + ν)G/E.
The latter expression is known as the Zener ratio, a_r = 2C₄₄/(C₁₁ − C₁₂), where the C_ij refer to elastic constants in Voigt (vector-matrix) notation. For an isotropic material, the ratio is one.
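The Zener ratio is a one-line computation once the elastic constants are known. In the sketch below, the constants (in GPa) are commonly cited room-temperature values that we supply for illustration; treat them as approximate rather than authoritative:

```python
def zener_ratio(c11, c12, c44):
    """Zener anisotropy ratio a_r = 2*C44 / (C11 - C12) for a cubic crystal,
    with elastic constants in Voigt notation; 1.0 means elastically isotropic."""
    return 2 * c44 / (c11 - c12)

# Illustrative room-temperature elastic constants in GPa:
print(zener_ratio(168.4, 121.4, 75.4))   # copper: ≈ 3.2, strongly anisotropic
print(zener_ratio(523.0, 203.0, 160.0))  # tungsten: ≈ 1.0, nearly isotropic
```

The copper value above 3 reflects the strong <111>-versus-<100> stiffness contrast described earlier, while tungsten's value near 1 is why it can be treated as elastically isotropic at room temperature.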
The limitation of the Zener ratio to cubic materials is waived in the tensorial anisotropy index A_T, which takes into consideration all 27 components of the fully anisotropic stiffness tensor. It is composed of two major parts, the former referring to components existing in the cubic tensor and the latter in the anisotropic tensor, so that the index is their sum. The first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic materials, for instance. The second component covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise.
Fiber-reinforced or layered composite materials exhibit anisotropic mechanical properties, due to orientation of the reinforcement material. In many fiber-reinforced composites like carbon fiber or glass fiber based composites, the weave of the material (e.g. unidirectional or plain weave) can determine the extent of the anisotropy of the bulk material. The tunability of orientation of the fibers allows for application-based designs of composite materials, depending on the direction of stresses applied onto the material.
Amorphous materials such as glass and polymers are typically isotropic. Due to the highly randomized orientation of macromolecules in polymeric materials, polymers are in general described as isotropic. However, mechanically gradient polymers can be engineered to have directionally dependent properties through processing techniques or the introduction of anisotropy-inducing elements. Researchers have built composite materials with aligned fibers and voids to generate anisotropic hydrogels, in order to mimic hierarchically ordered biological soft matter. 3D printing, especially fused deposition modeling (FDM), can introduce anisotropy into printed parts, because FDM extrudes and prints layers of thermoplastic material. This creates materials that are strong when tensile stress is applied parallel to the layers and weak when it is applied perpendicular to the layers.
Microfabrication
Anisotropic etching techniques (such as deep reactive-ion etching) are used in microfabrication processes to create well-defined microscopic features with a high aspect ratio. These features are commonly used in MEMS (microelectromechanical systems) and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants used to etch a certain material preferentially over certain crystallographic planes (e.g., KOH etching of silicon [100] produces pyramid-like structures).
Neuroscience
Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to move anisotropically, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the brain of the individual.
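The degree of anisotropy is commonly summarized by the fractional anisotropy (FA) of the diffusion tensor's three eigenvalues, a standard scalar ranging from 0 (fully isotropic) toward 1 (diffusion along a single axis). The eigenvalue sets below are illustrative, not measured data:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues:
    FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
         / sqrt(l1^2 + l2^2 + l3^2)."""
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(0.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0 — isotropic, free-water-like
print(fractional_anisotropy(1.7, 0.3, 0.3))  # ≈ 0.8 — fiber-tract-like diffusion
```

Voxels with high FA cluster along white-matter tracts, which is what tractography maps exploit.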
Remote sensing and radiative transfer modeling
Radiance fields (see Bidirectional reflectance distribution function (BRDF)) from a reflective surface are often not isotropic in nature. This makes the total energy reflected from any scene difficult to calculate. In remote sensing applications, anisotropy functions can be derived for specific scenes, immensely simplifying the calculation of the net reflectance or (thereby) the net irradiance of a scene.
For example, let the BRDF be a function of the incident direction 'i' and the viewing direction 'v' (as if from a satellite or other instrument). And let P be the planar albedo, which represents the total reflectance from the scene.
It is of interest because, with knowledge of the anisotropy function as defined, a measurement of the BRDF from a single viewing direction yields a measure of the total scene reflectance (planar albedo) for that specific incident geometry.
See also
Circular symmetry
References
External links
"Overview of Anisotropy"
DoITPoMS Teaching and Learning Package: "Introduction to Anisotropy"
"Gauge, and knitted fabric generally, is an anisotropic phenomenon"
Orientation (geometry)
Asymmetry
|
https://en.wikipedia.org/wiki/Antimatter
|
In modern physics, antimatter is defined as matter composed of the antiparticles (or "partners") of the corresponding particles in "ordinary" matter, and can be thought of as matter with reversed charge, parity, and time, known as CPT reversal. Antimatter occurs in natural processes like cosmic ray collisions and some types of radioactive decay, but only a tiny fraction of these have successfully been bound together in experiments to form antiatoms. Minuscule numbers of antiparticles can be generated at particle accelerators; however, total artificial production has been only a few nanograms. No macroscopic amount of antimatter has ever been assembled due to the extreme cost and difficulty of production and handling. Nonetheless, antimatter is an essential component of widely-available applications related to beta decay, such as positron emission tomography, radiation therapy, and industrial imaging.
In theory, a particle and its antiparticle (for example, a proton and an antiproton) have the same mass, but opposite electric charge, and other differences in quantum numbers.
A collision between any particle and its anti-particle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays), neutrinos, and sometimes less-massive particle–antiparticle pairs. The majority of the total energy of annihilation emerges in the form of ionizing radiation. If surrounding matter is present, the energy content of this radiation will be absorbed and converted into other forms of energy, such as heat or light. The amount of energy released is usually proportional to the total mass of the collided matter and antimatter, in accordance with the notable mass–energy equivalence equation, E = mc².
Antiparticles bind with each other to form antimatter, just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton (the antiparticle of the proton) can form an antihydrogen atom. The nuclei of antihelium have been artificially produced, albeit with difficulty, and are the most complex anti-nuclei so far observed. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles developed is called baryogenesis.
Definitions
Antimatter particles carry the same charge as matter particles, but of opposite sign. That is, an antiproton is negatively charged and an antielectron (positron) is positively charged. Neutrons do not carry a net charge, but their constituent quarks do. Protons and neutrons have a baryon number of +1, while antiprotons and antineutrons have a baryon number of –1. Similarly, electrons have a lepton number of +1, while that of positrons is –1. When a particle and its corresponding antiparticle collide, they are both converted into energy.
The French term contra-terrene led to the initialism "C.T." and the science fiction term "seetee", as used in such novels as Seetee Ship.
Conceptual history
The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.
The term antimatter was first used by Arthur Schuster, who coined it in two rather whimsical letters to Nature in 1898. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity.
The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. Although Dirac had laid the groundwork for the existence of these “antielectrons” he initially failed to pick up on the implications contained within his own equation. He freely gave the credit for that insight to J. Robert Oppenheimer, whose seminal paper “On the Theory of Electrons and Protons” (Feb 14th 1930) drew on Dirac’s equation and argued for the existence of a positively charged electron (a positron), which as a counterpart to the electron should have the same mass as the electron itself. This meant that it could not be, as Dirac had in fact suggested, a proton. Dirac further postulated the existence of antimatter in a 1931 paper which referred to the positron as an "anti-electron". These were discovered by Carl D. Anderson in 1932 and named positrons from "positive electron". Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929.
The Feynman–Stueckelberg interpretation states that antimatter and antiparticles are regular particles traveling backward in time.
Notation
One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by positive and negative electric charge. Thus, the electron and positron are denoted simply as e⁻ and e⁺, respectively. To prevent confusion, however, the two conventions are never mixed.
Properties
Theorized anti-gravitational properties of antimatter are currently being tested at the AEGIS and ALPHA-g experiments at CERN. Research is needed to study the possible gravitational effects between matter and antimatter, and between antimatter and antimatter. However, such research is difficult because the two annihilate when they meet, and because of the current difficulties of capturing and containing antimatter.
There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. This means a particle and its corresponding antiparticle must have identical masses and decay lifetimes (if unstable). It also implies that, for example, a star made up of antimatter (an "antistar") will shine just like an ordinary star. This idea was tested experimentally in 2016 by the ALPHA experiment, which measured the transition between the two lowest energy states of antihydrogen. The results, which are identical to that of hydrogen, confirmed the validity of quantum mechanics for antimatter.
On 27 September 2023, physicists reported studies which support the notion that antimatter particles behave in a similar way as normal matter in a gravitational field.
Origin and asymmetry
Most matter observable from the Earth seems to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.
Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays striking Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (that is, the rest mass of an electron multiplied by c²).
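The 511 keV figure follows directly from the electron's rest mass. A minimal check, using standard CODATA constants (the values are hard-coded here, not taken from the article):

```python
# Rest-mass energy of the electron, E = m_e * c^2, expressed in keV.
# This is the energy of each photon from electron-positron annihilation at rest.
M_E = 9.1093837015e-31   # electron mass, kg (CODATA)
C = 2.99792458e8         # speed of light, m/s (exact)
EV = 1.602176634e-19     # joules per electronvolt (exact)

rest_energy_joules = M_E * C**2
rest_energy_kev = rest_energy_joules / EV / 1e3
print(f"{rest_energy_kev:.1f} keV")  # ≈ 511.0 keV
```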
Observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant antimatter cloud surrounding the Galactic Center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the Galactic Center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains kinetic energy while falling into a stellar remnant.
Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma ray signatures of annihilation events in colliding superclusters.
In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter.
Antimatter quantum interferometry was first demonstrated in 2018 at the Positron Laboratory (L-NESS) of Rafael Ferragut in Como, Italy, by a group led by Marco Giammarchi.
Natural production
Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research presented at the American Astronomical Society reported antimatter (positrons) originating above thunderstorm clouds; positrons are produced in terrestrial gamma ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.
Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). It is hypothesized that during the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The exact mechanism that produced this asymmetry during baryogenesis remains an unsolved problem. One of the necessary conditions for this asymmetry is the violation of CP symmetry, which has been experimentally observed in the weak interaction.
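As a rough order-of-magnitude illustration (the estimate below is not a figure from the article), the temperature scale at which typical thermal energies reach the electron-positron pair-production threshold of 2·m_e·c² ≈ 1.022 MeV can be estimated by setting k_B·T equal to that threshold:

```python
# Order-of-magnitude estimate of the temperature at which mean thermal energy
# k_B * T reaches the e+e- pair-production threshold 2 * m_e * c^2.
M_E = 9.1093837015e-31    # electron mass, kg
C = 2.99792458e8          # speed of light, m/s
K_B = 1.380649e-23        # Boltzmann constant, J/K

threshold_energy = 2 * M_E * C**2           # ≈ 1.64e-13 J (1.022 MeV)
threshold_temperature = threshold_energy / K_B
print(f"{threshold_temperature:.2e} K")     # ≈ 1.2e10 K
```

Temperatures of this order existed only in the first second after the Big Bang, which is why thermal pair production is associated with the era of baryogenesis.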
Recent observations indicate black holes and neutron stars produce vast amounts of positron-electron plasma via the jets.
Observation in cosmic rays
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. This antimatter cannot all have been created in the Big Bang, but is instead attributed to production by cyclic processes at high energies. For instance, electron-positron pairs may be formed in pulsars, as a magnetized neutron star's rotation cycle shears electron-positron pairs from the star surface. There the antimatter forms a wind that crashes upon the ejecta of the progenitor supernova. This weathering takes place as "the cold, magnetized relativistic wind launched by the star hits the non-relativistically expanding ejecta, a shock wave system forms in the impact: the outer one propagates in the ejecta, while a reverse shock propagates back towards the star." The ejection of matter in the outer shock wave and the production of antimatter in the reverse shock wave are steps in a space weather cycle.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV. One suggested interpretation of these results is positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is an ongoing search for larger antimatter nuclei, such as antihelium nuclei (that is, anti-alpha particles), in cosmic rays. The detection of natural antihelium could imply the existence of large antimatter structures such as an antistar. A prototype of AMS-02, designated AMS-01, was flown into space on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium to helium flux ratio. AMS-02 revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and the team is currently trying to rule out contamination.
Artificial production
Positrons
Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in larger numbers than by any previous synthetic process. A laser drove electrons into a gold target's nuclei, causing the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; newer simulations showed that short bursts of ultra-intense lasers and millimeter-thick gold are a far more effective source.
Antiprotons, antineutrons, and antinuclei
The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark. The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.
In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN. At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.
Antihydrogen atoms
In 1995, CERN announced that it had successfully brought into existence nine hot antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP.
In 1999, CERN activated the Antiproton Decelerator, a device that decelerates antiprotons to a small fraction of their production energy – still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen. The ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap. The overall cooling process is workable, but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap – about 0.1% of the original amount.
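The quoted efficiency is simple to verify from the two counts given:

```python
# Antiproton cooling efficiency implied by the figures in the text:
# ~25 million antiprotons leave the Antiproton Decelerator,
# ~25,000 survive to the Penning–Malmberg trap.
leaving_decelerator = 25_000_000
reaching_trap = 25_000

efficiency = reaching_trap / leaving_decelerator
print(f"{efficiency:.1%}")  # 0.1%
```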
The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with energies less than . While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some combine with antiprotons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.
In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series).
Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields. Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped.
On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before. ALPHA has used these trapped atoms to initiate research into the spectral properties of the antihydrogen.
In 2016, a new antiproton decelerator and cooler called ELENA (Extra Low ENergy Antiproton decelerator) was built. It takes the antiprotons from the Antiproton Decelerator and cools them to 90 keV, which is "cold" enough to study. ELENA is a compact storage ring that decelerates the beam further in stages, using electron cooling to keep it focused. More than one hundred antiprotons can be captured per second, a huge improvement, but it would still take several thousand years to make a nanogram of antimatter.
The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, its facilities are capable of producing ten million antiprotons per minute. Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6×10²³ atoms of antihydrogen). However, CERN produces only 1% of the antimatter that Fermilab does, and neither facility is designed to produce antimatter. According to Gerald Jackson, using technology already in use today, we are capable of producing and capturing 20 grams of antimatter particles per year at a yearly cost of 670 million dollars per facility.
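The "100 billion years" figure can be reproduced from the quoted production rate and Avogadro's number:

```python
# Time to accumulate one mole (~1 g) of antihydrogen at CERN's quoted rate of
# ten million antiprotons per minute, assuming (unrealistically) that every
# antiproton becomes one antihydrogen atom.
AVOGADRO = 6.02214076e23          # atoms per mole
RATE_PER_MINUTE = 10_000_000      # antiprotons per minute
MINUTES_PER_YEAR = 60 * 24 * 365.25

minutes_needed = AVOGADRO / RATE_PER_MINUTE
years_needed = minutes_needed / MINUTES_PER_YEAR
print(f"{years_needed:.2e} years")  # ≈ 1.1e11 years, on the order of 100 billion
```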
Antihelium
Antihelium-3 nuclei () were first observed in the 1970s in proton–nucleus collision experiments at the Institute for High Energy Physics by Y. Prockoshkin's group (Protvino near Moscow, USSR) and later created in nucleus–nucleus collision experiments. Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles) () from such collisions.
The Alpha Magnetic Spectrometer on the International Space Station has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3.
Preservation
Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.
In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. The record for storing antiparticles is currently held by the TRAP experiment at CERN: antiprotons were kept in a Penning trap for 405 days. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation.
Cost
Scientists claim that antimatter is the costliest material to make. In 2006, Gerald Smith estimated that $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions). In comparison, the cost of the Manhattan Project (which produced the first atomic weapon) was estimated at $23 billion in 2007 dollars, adjusted for inflation.
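The per-gram figure quoted for positrons follows from simple division:

```python
# Cost per gram implied by the 2006 estimate in the text:
# $250 million for 10 milligrams of positrons.
positron_cost_usd = 250e6
positron_mass_g = 10e-3           # 10 mg expressed in grams

cost_per_gram = positron_cost_usd / positron_mass_g
print(f"${cost_per_gram:,.0f} per gram")  # $25,000,000,000 -> $25 billion per gram
```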
Several studies funded by the NASA Institute for Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.
Uses
Medical
Matter–antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Antiprotons have also been shown within laboratory experiments to have the potential to treat certain cancers, in a similar method currently used for ion (proton) therapy.
Fuel
Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion or another antimatter rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.
If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (c² ≈ 9×10^16 J/kg) is about 10 orders of magnitude greater than chemical energies, about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction), and about 2 orders of magnitude greater than the best possible results expected from fusion (the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10^17 J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc²), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
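The 180 PJ and 43 Mt figures correspond to the complete annihilation of 1 kg of antimatter with 1 kg of matter (2 kg of rest mass in total), which can be checked directly:

```python
# Energy released when 1 kg of antimatter annihilates with 1 kg of matter:
# E = m * c^2 with m = 2 kg of total rest mass fully converted to energy.
C = 2.99792458e8             # speed of light, m/s
TNT_MEGATON_J = 4.184e15     # joules per megaton of TNT

energy_j = 2.0 * C**2        # ≈ 1.8e17 J = 180 petajoules
megatons = energy_j / TNT_MEGATON_J
print(f"{energy_j:.2e} J ≈ {megatons:.0f} Mt TNT")  # ≈ 1.80e17 J ≈ 43 Mt
```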
Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron–positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a lifetime of 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust.
Charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.74).
Weapons
Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. Nonetheless, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.
See also
References
Further reading
External links
Freeview Video 'Antimatter' by the Vega Science Trust and the BBC/OU
CERN Webcasts (RealPlayer required)
What is Antimatter? (from the Frequently Asked Questions at the Center for Antimatter–Matter Studies)
FAQ from CERN with information about antimatter aimed at the general reader, posted in response to antimatter's fictional portrayal in Angels & Demons
Antimatter at Angels and Demons, CERN
What is direct CP-violation?
Animated illustration of antihydrogen production at CERN from the Exploratorium.
Quantum field theory
Fictional power sources
Articles containing video clips
|
https://en.wikipedia.org/wiki/Antiparticle
|
In particle physics, every type of particle is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate.
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle, which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, π0 mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
History
Experiment
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud chamber trace the same helical path as electron paths, but rotate in the opposite direction with respect to the magnetic field, because a positron has the same magnitude of charge-to-mass ratio as an electron but the opposite sign of charge.
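The curling Anderson measured can be made quantitative: a particle of momentum p and charge q moving perpendicular to a magnetic field B follows a circle of radius r = p / (|q|·B). The momentum and field strength below are illustrative values, not numbers from Anderson's experiment:

```python
# Radius of curvature of a charged-particle track in a magnetic field:
# r = p / (|q| * B). Values below (10 MeV/c, 1 T) are illustrative only.
E_CHARGE = 1.602176634e-19   # elementary charge, C
C = 2.99792458e8             # speed of light, m/s

momentum_si = 10e6 * E_CHARGE / C   # 10 MeV/c converted to kg*m/s
b_field = 1.0                       # magnetic field, tesla

radius_m = momentum_si / (E_CHARGE * b_field)
print(f"{radius_m * 100:.1f} cm")   # ≈ 3.3 cm
```

The radius is the same for an electron and a positron of equal momentum; only the sense of rotation differs, which is exactly the signature Anderson used.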
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
Dirac hole theory
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper "A Theory of Electrons and Protons". However, these "negative-energy electrons" turned out to be positrons, and not protons.
This picture implied an infinite negative charge for the universe – a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
Particle–antiparticle annihilation
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through processes such as the one pictured here, which is a complicated example of mass renormalization.
Properties
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation C, parity P and time reversal T.
C and P are linear, unitary operators; T is antilinear and antiunitary.
If |p, σ, n⟩ denotes the quantum state of a particle n with momentum p and spin J whose component in the z-direction is σ, then one has

CPT |p, σ, n⟩ ∝ |p, −σ, n^c⟩,

where n^c denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group, which means the antiparticle has the same mass and the same spin.
If C, P and T can be defined separately on the particles and antiparticles, then

T |p, σ, n⟩ ∝ |−p, −σ, n⟩,
CP |p, σ, n⟩ ∝ |−p, σ, n^c⟩,
C |p, σ, n⟩ ∝ |p, σ, n^c⟩,

where the proportionality sign indicates that there might be a phase on the right-hand side.
As C anticommutes with the charges, CQ = −QC, a particle and its antiparticle have opposite electric charges q and −q.
Quantum field theory
This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.
One may try to quantize an electron field without mixing the annihilation and creation operators by writing

ψ(x) = Σ_k u_k(x) a_k e^(−iE(k)t),

where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and a_k denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian

H = Σ_k E(k) a_k† a_k,
then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge-conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations

b_k′ = a_k†  and  b_k′† = a_k,

where k′ has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form

ψ(x) = Σ_{E(k)>0} u_k(x) a_k e^(−iE(k)t) + Σ_{E(k)<0} u_k(x) b_k′† e^(−iE(k)t),

where the first sum is over positive energy states and the second over those of negative energy. The energy becomes

H = Σ_{E(k)>0} E(k) a_k† a_k + Σ_{E(k)<0} |E(k)| b_k′† b_k′ + E_0,
where E_0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., a_k |0⟩ = 0 and b_k |0⟩ = 0. Then the energy of the vacuum is exactly E_0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of a_k and b_k shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
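The reordering argument can be checked with a toy numerical model. The mode energies and occupation numbers below are made up for illustration; the point is that the naive sum Σ E(k)·n(k) can be negative, while the rewritten form differs from it only by the constant E_0 and is non-negative relative to the vacuum:

```python
# Toy check: naive Hamiltonian sum_k E(k)*n(k) versus the reordered form
# sum_pos E*n_a + sum_neg |E|*n_b + E0, with hole occupations n_b = 1 - n.
# Mode energies and occupations are invented for illustration (fermions: n is 0 or 1).
pos_modes = [1.0, 2.0]          # positive-energy mode energies
neg_modes = [-1.5, -0.5]        # negative-energy mode energies
n_pos = [1, 0]                  # particle occupations of the positive modes
n_neg = [1, 0]                  # naive occupations of the negative modes

e0 = sum(neg_modes)             # the "sea" constant (infinite in the real theory)
h_naive = sum(e * n for e, n in zip(pos_modes, n_pos)) + \
          sum(e * n for e, n in zip(neg_modes, n_neg))   # 1.0 - 1.5 = -0.5: not positive

n_b = [1 - n for n in n_neg]    # antiparticle (hole) occupations
h_reordered = sum(e * n for e, n in zip(pos_modes, n_pos)) + \
              sum(abs(e) * n for e, n in zip(neg_modes, n_b)) + e0

assert abs(h_naive - h_reordered) < 1e-12   # same quantity, merely rewritten
print(h_naive, h_reordered - e0)            # energy relative to the vacuum is >= 0
```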
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Feynman–Stueckelberg interpretation
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stueckelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, antiparticles are shown traveling backward in time relative to normal matter. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Since this picture was first developed by Stueckelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stueckelberg interpretation of antiparticles to honor both scientists.
See also
List of particles
Antimatter
Gravitational interaction of antimatter
Parity, charge conjugation, and time reversal symmetry
CP violation
Quantum field theory
Baryogenesis, baryon asymmetry, and leptogenesis
One-electron universe
Paul Dirac
Notes
References
External links
Antimatter at CERN
https://en.wikipedia.org/wiki/Anchor
An anchor is a device, normally made of metal, used to secure a vessel to the bed of a body of water to prevent the craft from drifting due to wind or current. The word derives from Latin ancora, which itself comes from the Greek ἄγκυρα (ankyra).
Anchors can either be temporary or permanent. Permanent anchors are used in the creation of a mooring, and are rarely moved; a specialist service is normally needed to move or maintain them. Vessels carry one or more temporary anchors, which may be of different designs and weights.
A sea anchor is a drag device, not in contact with the seabed, used to minimise drift of a vessel relative to the water. A drogue is a drag device used to slow or help steer a vessel running before a storm in a following or overtaking sea, or when crossing a bar in a breaking sea.
Overview
Anchors achieve holding power either by "hooking" into the seabed, or weight, or a combination of the two. Permanent moorings use large masses (commonly a block or slab of concrete) resting on the seabed. Semi-permanent mooring anchors (such as mushroom anchors) and large ship's anchors derive a significant portion of their holding power from their weight, while also hooking or embedding in the bottom. Modern anchors for smaller vessels have metal flukes that hook on to rocks on the bottom or bury themselves in soft seabed.
The vessel is attached to the anchor by the rode (also called a cable or a warp). It can be made of rope, chain or a combination of rope and chain. The ratio of the length of rode to the water depth is known as the scope (see below).
Holding ground
Holding ground is the area of sea floor that holds an anchor, and thus the attached ship or boat. Different types of anchor are designed to hold in different types of holding ground. Some bottom materials hold better than others; for instance, hard sand holds well, shell holds poorly. Holding ground may be fouled with obstacles. An anchorage location may be chosen for its holding ground. In poor holding ground, only the weight of an anchor matters; in good holding ground, it is able to dig in, and the holding power can be significantly higher.
History
Evolution of the anchor
The earliest anchors were probably rocks, and many rock anchors have been found dating from at least the Bronze Age. Pre-European Maori waka (canoes) used one or more hollowed stones, tied with flax ropes, as anchors. Many modern moorings still rely on a large rock as the primary element of their design. However, using pure weight to resist the forces of a storm works well only as a permanent mooring; a large enough rock would be nearly impossible to move to a new location.
The ancient Greeks used baskets of stones, large sacks filled with sand, and wooden logs filled with lead. According to Apollonius Rhodius and Stephen of Byzantium, anchors were formed of stone, and Athenaeus states that they were also sometimes made of wood. Such anchors held the vessel merely by their weight and by their friction along the bottom.
Fluked anchors
Iron was afterwards introduced for the construction of anchors, and an improvement was made by forming them with teeth, or "flukes", to fasten themselves into the bottom. This is the iconic anchor shape most familiar to non-sailors.
This form has been used since antiquity. The Roman Nemi ships of the 1st century AD used this form. The Viking Ladby ship (probably 10th century) used a fluked anchor of this type, made of iron, which would have had a wooden stock mounted perpendicular to the shank and flukes to make the flukes contact the bottom at a suitable angle to hook or penetrate.
Admiralty anchor
The Admiralty Pattern anchor, or simply "Admiralty", also known as a "Fisherman", consists of a central shank with a ring or shackle for attaching the rode (the rope, chain, or cable connecting the ship and the anchor). At the other end of the shank there are two arms, carrying the flukes, while the stock is mounted to the shackle end, at ninety degrees to the arms. When the anchor lands on the bottom, it generally falls over with the arms parallel to the seabed. As a strain comes onto the rope, the stock digs into the bottom, canting the anchor until one of the flukes catches and digs into the bottom.
The Admiralty Anchor is an entirely independent reinvention of a classical design, as seen in one of the Nemi ship anchors. This basic design remained unchanged for centuries, with the most significant changes being to the overall proportions, and a move from stocks made of wood to iron stocks in the late 1830s and early 1840s.
Since one fluke always protrudes up from the set anchor, there is a great tendency of the rode to foul the anchor as the vessel swings due to wind or current shifts. When this happens, the anchor may be pulled out of the bottom, and in some cases may need to be hauled up to be re-set. In the mid-19th century, numerous modifications were attempted to alleviate these problems, as well as improve holding power, including one-armed mooring anchors. The most successful of these patent anchors, the Trotman Anchor, introduced a pivot at the centre of the crown where the arms join the shank, allowing the "idle" upper arm to fold against the shank. When deployed, the lower arm may initially lie folded against the shank with the tip of its fluke tilted upwards; each fluke therefore has a tripping palm at its base, which hooks the bottom as the folded arm drags along the seabed, unfolding the downward-oriented arm until the tip of the fluke can engage the bottom.
Handling and storage of these anchors requires special equipment and procedures. Once the anchor is hauled up to the hawsepipe, the ring end is hoisted up to the end of a timber projecting from the bow known as the cathead. The crown of the anchor is then hauled up with a heavy tackle until one fluke can be hooked over the rail. This is known as "catting and fishing" the anchor. Before dropping the anchor, the fishing process is reversed, and the anchor is dropped from the end of the cathead.
Stockless anchor
The stockless anchor, patented in England in 1821, represented the first significant departure in anchor design in centuries. Although their holding-power-to-weight ratio is significantly lower than admiralty pattern anchors, their ease of handling and stowage aboard large ships led to almost universal adoption. In contrast to the elaborate stowage procedures for earlier anchors, stockless anchors are simply hauled up until they rest with the shank inside the hawsepipes, and the flukes against the hull (or inside a recess in the hull).
While there are numerous variations, stockless anchors consist of a set of heavy flukes connected by a pivot or ball and socket joint to a shank. Cast into the crown of the anchor is a set of tripping palms, projections that drag on the bottom, forcing the main flukes to dig in.
Small boat anchors
Until the mid-20th century, anchors for smaller vessels were either scaled-down versions of admiralty anchors, or simple grapnels. As new designs with greater holding-power-to-weight ratios were sought, a great variety of anchor designs has emerged. Many of these designs are still under patent, and other types are best known by their original trademarked names.
Grapnel anchor / drag
A traditional design, the grapnel is merely a shank (no stock) with four or more tines, also known as a drag. It has the benefit that, no matter how it reaches the bottom, one or more tines will be aimed to set. In coral or rock, it is often able to set quickly by hooking into the structure, but may be more difficult to retrieve. A grapnel is often quite light, and may have additional uses as a tool to recover gear lost overboard. Its light weight also makes it relatively easy to move and carry; however, its shape is generally not compact and it may be awkward to stow unless a collapsing model is used.
Grapnels rarely have enough fluke area to develop much hold in sand, clay, or mud. It is not unknown for the anchor to foul on its own rode, or to foul the tines with refuse from the bottom, preventing it from digging in. On the other hand, it is quite possible for this anchor to find such a good hook that, without a trip line from the crown, it is impossible to retrieve.
Herreshoff anchor
Designed by yacht designer L. Francis Herreshoff, this is essentially the same pattern as an admiralty anchor, albeit with small diamond-shaped flukes or palms. The novelty of the design lay in the means by which it could be broken down into three pieces for stowage. In use, it still presents all the issues of the admiralty pattern anchor.
Northill anchor
Originally designed as a lightweight anchor for seaplanes, this design consists of two plough-like blades mounted to a shank, with a folding stock crossing through the crown of the anchor.
CQR plough anchor
Many manufacturers produce a plough-type anchor, so-named after its resemblance to an agricultural plough. All such anchors are copied from the original CQR (Coastal Quick Release, or Clyde Quick Release, later rebranded as 'secure' by Lewmar), a 1933 design patented in the UK by mathematician Geoffrey Ingram Taylor.
Plough anchors stow conveniently in a roller at the bow, and have been popular with cruising sailors and private boaters. Ploughs can be moderately good in all types of seafloor, though not exceptional in any. Contrary to popular belief, the CQR's hinged shank is not intended to let the anchor turn with changes in pull direction without breaking out; it is actually there to prevent the shank's weight from disrupting the fluke's orientation while setting. The hinge can wear out and may trap a sailor's fingers. Some later plough anchors have a rigid shank, such as Lewmar's "Delta".
A plough anchor has a fundamental flaw: like its namesake, the agricultural plough, it digs in but then tends to break out back to the surface. Plough anchors sometimes have difficulty setting at all, and instead skip across the seafloor. By contrast, modern efficient anchors tend to be "scoop" types that dig ever deeper.
Delta anchor
The Delta anchor was derived from the CQR. It was patented by Philip McCarron, James Stewart, and Gordon Lyall of British marine manufacturer Simpson-Lawrence Ltd in 1992. It was designed as an advance over the anchors used for floating systems such as oil rigs. It retains the weighted tip of the CQR but has a much higher fluke area to weight ratio than its predecessor. The designers also eliminated the sometimes troublesome hinge. It is a plough anchor with a rigid, arched shank. It is described as self-launching because it can be dropped from a bow roller simply by paying out the rode, without manual assistance. This is an oft copied design with the European Brake and Australian Sarca Excel being two of the more notable ones. Although it is a plough type anchor, it sets and holds reasonably well in hard bottoms.
Danforth anchor
American Richard Danforth invented the Danforth Anchor in the 1940s for use aboard landing craft. It uses a stock at the crown to which two large flat triangular flukes are attached. The stock is hinged so the flukes can orient toward the bottom (and on some designs may be adjusted for an optimal angle depending on the bottom type). Tripping palms at the crown act to tip the flukes into the seabed. The design is a burying variety, and once well set can develop high resistance. Its lightweight and compact flat design make it easy to retrieve and relatively easy to store; some anchor rollers and hawsepipes can accommodate a fluke-style anchor.
A Danforth does not usually penetrate or hold in gravel or weeds. In boulders and coral it may hold by acting as a hook. If there is much current, or if the vessel is moving while dropping the anchor, it may "kite" or "skate" over the bottom due to the large fluke area acting as a sail or wing.
The FOB HP anchor designed in Brittany in the 1970s is a Danforth variant designed to give increased holding through its use of rounded flukes setting at a 30° angle.
The Fortress is an American aluminum alloy Danforth variant that can be disassembled for storage and it features an adjustable 32° and 45° shank/fluke angle to improve holding capability in common sea bottoms such as hard sand and soft mud. This anchor performed well in a 1989 US Naval Sea Systems Command (NAVSEA) test and in an August 2014 holding power test that was conducted in the soft mud bottoms of the Chesapeake Bay.
Bruce or claw anchor
This claw-shaped anchor was designed by Peter Bruce from Scotland in the 1970s. Bruce gained his early reputation from the production of large-scale commercial anchors for ships and fixed installations such as oil rigs. The design was later scaled down for small boats, and copies of this popular design abound. The Bruce and its copies, known generically as "claw type anchors", have been adopted on smaller boats (partly because they stow easily on a bow roller) but they are most effective in larger sizes. Claw anchors are quite popular on charter fleets, as they have a high chance of setting on the first try in many bottoms. They have the reputation of not breaking out with tide or wind changes, instead slowly turning in the bottom to align with the force.
Bruce anchors can have difficulty penetrating weedy bottoms and grass. They offer a fairly low holding-power-to-weight ratio and generally have to be oversized to compete with newer types.
Scoop type anchors
Three time circumnavigator German Rolf Kaczirek invented the Bügel Anker in the 1980s. Kaczirek wanted an anchor that was self-righting without necessitating a ballasted tip. Instead, he added a roll bar and switched out the plough share for a flat blade design. As none of the innovations of this anchor were patented, copies of it abound.
Alain Poiraud of France introduced the scoop type anchor in 1996. Similar in design to the Bügel anchor, Poiraud's design features a concave fluke shaped like the blade of a shovel, with a shank attached parallel to the fluke, and the load applied toward the digging end. It is designed to dig into the bottom like a shovel, and dig deeper as more pressure is applied. The common challenge with all the scoop type anchors is that they set so well, they can be difficult to weigh.
Bügelanker, or Wasi: This German-designed bow anchor has a sharp tip for penetrating weed, and features a roll-bar that allows the correct setting attitude to be achieved without the need for extra weight to be inserted into the tip.
Spade: This is a French design that has proven successful since 1996. It features a demountable shank (hollow in some instances) and the choice of galvanized steel, stainless steel, or aluminium construction, which means a lighter and more easily stowable anchor. The geometry also makes this anchor self stowing on a single roller.
Rocna: This New Zealand spade design, available in galvanised or stainless steel, has been produced since 2004. It has a roll-bar (similar to that of the Bügel), a large spade-like fluke area, and a sharp toe for penetrating weed and grass. The Rocna sets quickly and holds well.
Mantus: This is claimed to be a fast setting anchor with high holding power. It is designed as an all round anchor capable of setting even in challenging bottoms such as hard sand/clay bottoms and grass. The shank is made out of a high tensile steel capable of withstanding high loads. It is similar in design to the Rocna but has a larger and wider roll-bar that reduces the risk of fouling and increases the angle of the fluke that results in improved penetration in some bottoms.
Ultra: This is an innovative spade design that dispenses with a roll-bar. Made primarily of stainless steel, its main arm is hollow, while the fluke tip has lead within it. It is similar in appearance to the Spade anchor.
Vulcan: A recent sibling to the Rocna, this anchor performs similarly but does not have a roll-bar. Instead the Vulcan has patented design features such as the "V-bulb" and the "Roll Palm" that allow it to dig in deeply. The Vulcan was designed primarily for sailors who had difficulties accommodating the roll-bar Rocna on their bow. Peter Smith (originator of the Rocna) designed it specifically for larger powerboats. Both Vulcans and Rocnas are available in galvanised steel, or in stainless steel. The Vulcan is similar in appearance to the Spade anchor.
Knox Anchor: This is produced in Scotland and was invented by Professor John Knox. It has a divided concave large area fluke arrangement and a shank in high tensile steel. A roll bar similar to the Rocna gives fast setting and a holding power of about 40 times anchor weight.
Other temporary anchors
Mud weight: Consists of a blunt heavy weight, usually cast iron or cast lead, that sinks into the mud and resists lateral movement. It is suitable only for soft silt bottoms and in mild conditions. Sizes range between 5 and 20 kg for small craft. Various designs exist and many are home produced from lead or improvised with heavy objects. This is a commonly used method on the Norfolk Broads in England.
Bulwagga: This is a unique design featuring three flukes instead of the usual two. It has performed well in tests by independent sources such as American boating magazine Practical Sailor.
Permanent anchors
These are used where the vessel is permanently or semi-permanently sited, for example in the case of lightvessels or channel marker buoys. The anchor needs to hold the vessel in all weathers, including the most severe storm, but needs to be lifted only occasionally, at most – for example, only if the vessel is to be towed into port for maintenance. An alternative to using an anchor under these circumstances, especially if the anchor need never be lifted at all, may be to use a pile that is driven into the seabed.
Permanent anchors come in a wide range of types and have no standard form. A slab of rock with an iron staple in it to attach a chain to would serve the purpose, as would any dense object of appropriate weight (for instance, an engine block). Modern moorings may be anchored by augers, which look and act like oversized screws drilled into the seabed, or by barbed metal beams pounded in (or even driven in with explosives) like pilings, or by a variety of other non-mass means of getting a grip on the bottom. One method of building a mooring is to use three or more conventional anchors laid out with short lengths of chain attached to a swivel, so no matter which direction the vessel moves, one or more anchors are aligned to resist the force.
Mushroom
The mushroom anchor is suitable where the seabed is composed of silt or fine sand. It was invented by Robert Stevenson for use aboard the 82-ton converted fishing boat Pharos, which served as a lightvessel between 1807 and 1810 near the Bell Rock while the lighthouse was being constructed. It was equipped with a 1.5-ton example.
It is shaped like an inverted mushroom, the head becoming buried in the silt. A counterweight is often provided at the other end of the shank to lay it down before it becomes buried.
A mushroom anchor normally sinks in the silt to the point where it has displaced its own weight in bottom material, thus greatly increasing its holding power. These anchors are suitable only for a silt or mud bottom, since they rely upon suction and cohesion of the bottom material, which rocky or coarse sand bottoms lack. The holding power of this anchor is at best about twice its weight until it becomes buried, when it can be as much as ten times its weight. They are available in sizes from about 5 kg up to several tons.
Deadweight
A deadweight is an anchor that relies solely on being a heavy weight. It is usually just a large block of concrete or stone at the end of the chain. Its holding power is defined by its weight underwater (i.e., taking its buoyancy into account) regardless of the type of seabed, although suction can increase this if it becomes buried. Consequently, deadweight anchors are used where mushroom anchors are unsuitable, for example in rock, gravel or coarse sand. An advantage of a deadweight anchor over a mushroom is that if it does drag, it continues to provide its original holding force. The disadvantage of using deadweight anchors in conditions where a mushroom anchor could be used is that it needs to be around ten times the weight of the equivalent mushroom anchor.
Auger
Auger anchors can be used to anchor permanent moorings, floating docks, fish farms, etc. These anchors, which have one or more slightly pitched self-drilling threads, must be screwed into the seabed with the use of a tool, so require access to the bottom, either at low tide or by use of a diver. Hence they can be difficult to install in deep water without special equipment.
Weight for weight, augers have a higher holding power than other permanent designs, and so can be cheap and relatively easily installed, although they are difficult to set in extremely soft mud.
High-holding types
There is a need in the oil-and-gas industry to resist large anchoring forces when laying pipelines and for drilling vessels. These anchors are installed and removed using a support tug and pennant/pendant wire. Some examples are the Stevin range supplied by Vrijhof Ankers. Large plate anchors such as the Stevmanta are used for permanent moorings.
Anchoring gear
The elements of anchoring gear include the anchor, the cable (also called a rode), the method of attaching the two together, the method of attaching the cable to the ship, charts, and a method of learning the depth of the water.
Vessels may carry a number of anchors: bower anchors are the main anchors used by a vessel and are normally carried at the bow. A kedge anchor is a light anchor used for warping a vessel (kedging), or more commonly on yachts for mooring quickly or in benign conditions. A stream anchor, which is usually heavier than a kedge anchor, can be used for kedging or warping in addition to temporary mooring and restraining stern movement in tidal conditions or in waters where vessel movement needs to be restricted, such as rivers and channels.
Charts are vital to good anchoring. Knowing the location of potential dangers, as well as being useful in estimating the effects of weather and tide in the anchorage, is essential in choosing a good place to drop the hook. One can get by without referring to charts, but they are an important tool and a part of good anchoring gear, and a skilled mariner would not choose to anchor without them.
Anchor rode
The anchor rode (or "cable" or "warp") that connects the anchor to the vessel is usually made up of chain, rope, or a combination of those. Large ships use only chain rode. Smaller craft might use a rope/chain combination or an all chain rode. All rodes should have some chain; chain is heavy but it resists abrasion from coral, sharp rocks, or shellfish beds, whereas a rope warp is susceptible to abrasion and can fail in a short time when stretched against an abrasive surface. The weight of the chain also helps keep the direction of pull on the anchor closer to horizontal, which improves holding, and absorbs part of snubbing loads. Where weight is not an issue, a heavier chain provides better holding by forming a catenary curve through the water and resting as much of its length on the bottom as would not be lifted by tension of the mooring load. Any changes to the tension are accommodated by additional chain being lifted or settling on the bottom, and this absorbs shock loads until the chain is straight, at which point the full load is taken by the anchor. Additional dissipation of shock loads can be achieved by fitting a snubber between the chain and a bollard or cleat on deck. This also reduces shock loads on the deck fittings, and the vessel usually lies more comfortably and quietly.
Being strong and elastic, nylon rope is the most suitable as an anchor rode. Polyester (terylene) is stronger but less elastic than nylon. Both materials sink, so they avoid fouling other craft in crowded anchorages and do not absorb much water. Neither breaks down quickly in sunlight. Elasticity helps absorb shock loading, but causes faster abrasive wear when the rope stretches over an abrasive surface, like a coral bottom or a poorly designed chock. Polypropylene ("polyprop") is not suited to rodes because it floats and is much weaker than nylon, being barely stronger than natural fibres. Some grades of polypropylene break down in sunlight and become hard, weak, and unpleasant to handle. Natural fibres such as manila or hemp are still used in developing nations but absorb a lot of water, are relatively weak, and rot, although they do give good handling grip and are often relatively cheap. Ropes that have little or no elasticity are not suitable as anchor rodes. Elasticity is partly a function of the fibre material and partly of the rope structure.
All anchors should have chain at least equal to the boat's length. Some skippers prefer an all chain warp for greater security on coral or sharp edged rock bottoms. The chain should be shackled to the warp through a steel eye or spliced to the chain using a chain splice. The shackle pin should be securely wired or moused. Either galvanized or stainless steel is suitable for eyes and shackles, galvanised steel being the stronger of the two. Some skippers prefer to add a swivel to the rode. There is a school of thought that says these should not be connected to the anchor itself, but should be somewhere in the chain. However, most skippers connect the swivel directly to the anchor.
Scope
Scope is the ratio of length of the rode to the depth of the water measured from the highest point (usually the anchor roller or bow chock) to the seabed, making allowance for the highest expected tide. The function of this ratio is to ensure that the pull on the anchor is unlikely to break it out of the bottom if it is embedded, or lift it off a hard bottom, either of which is likely to result in the anchor dragging. A large scope induces a load that is nearly horizontal.
In moderate conditions the ratio of rode length to depth should be 4:1; where there is sufficient swing-room, a greater scope is always better. In rougher conditions it should be up to twice this, with the extra length giving more stretch and a smaller angle to the bottom to resist the anchor breaking out. For example, if the combined depth from the anchor roller to the seabed is 9 meters (about 30 feet), the amount of rode to let out in moderate conditions is 36 meters (120 feet). (For this reason it is important to have a reliable and accurate method of measuring the depth of water.)
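The scope arithmetic can be sketched in a few lines. A minimal Python illustration, where the function name and the example figures are ours rather than from the text:

```python
def rode_length(water_depth_m, bow_height_m, scope_ratio):
    """Length of rode to pay out for a given scope ratio.

    'Depth' for scope purposes is measured from the anchor roller
    to the seabed, so the roller's height above the water is added
    to the water depth.
    """
    return (water_depth_m + bow_height_m) * scope_ratio

# Hypothetical figures: 8 m of water plus a 1 m bow roller, 4:1 scope.
print(rode_length(8.0, 1.0, 4))  # 36.0 metres of rode in moderate conditions
```

Doubling the scope for rough weather is then just a change of the last argument.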
When using a rope rode, there is a simple way to estimate the scope: while lying back hard on the anchor, the ratio of the length of rode above the water to the height of the bow chock above the water is approximately the same as the scope ratio. The basis for this is simple geometry (the intercept theorem): the ratio between two sides of a triangle stays the same regardless of the size of the triangle, as long as the angles do not change.
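The bow-triangle estimate can be checked numerically. A minimal Python sketch under the assumption of a taut, straight rode, with names and figures of our choosing:

```python
def scope_from_bow_triangle(rode_above_water_m, bow_height_m):
    """Estimate scope as (rode above the water) / (bow chock height).

    With the rode pulled into a straight line, the small triangle
    between the bow chock, the waterline, and the rode is similar
    to the full triangle down to the seabed, so this ratio equals
    the overall scope ratio.
    """
    return rode_above_water_m / bow_height_m

# Consistency check with exact straight-line geometry (hypothetical numbers):
scope = 5.0        # true scope, roller-to-seabed depth 9 m, 45 m of rode
bow_height = 1.0   # bow chock height above the water, in metres
rode_above_water = bow_height * scope  # similar triangles
print(scope_from_bow_triangle(rode_above_water, bow_height))  # 5.0
```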
Generally, the rode should be between 5 and 10 times the depth to the seabed, giving a scope of 5:1 or 10:1; the larger the number, the shallower the angle between the cable and the seafloor, and the less upward force acting on the anchor. A 10:1 scope gives the greatest holding power, but also allows much more drifting due to the longer amount of cable paid out. Anchoring with sufficient scope and/or heavy chain rode brings the direction of strain close to parallel with the seabed. This is particularly important for light, modern anchors designed to bury in the bottom, where scopes of 5:1 to 7:1 are common, whereas heavy anchors and moorings can use a scope of 3:1 or less. Some modern anchors, such as the Ultra, hold with a scope of 3:1; but, unless the anchorage is crowded, a longer scope always reduces shock stresses.
Anchoring techniques
The basic anchoring sequence consists of determining the location, dropping the anchor, laying out the scope, setting the hook, and assessing where the vessel ends up. The ship seeks a location that is sufficiently protected, has suitable holding ground, enough depth at low tide, and enough room for the boat to swing.
The location to drop the anchor should be approached from down wind or down current, whichever is stronger. As the chosen spot is approached, the vessel should be stopped or even beginning to drift back. The anchor should initially be lowered quickly but under control until it is on the bottom (see anchor windlass). The vessel should continue to drift back, and the cable should be veered out under control (slowly) so it is relatively straight.
Once the desired scope is laid out, the vessel should be gently forced astern, usually using the auxiliary motor but possibly by backing a sail. A hand on the anchor line may telegraph a series of jerks and jolts, indicating the anchor is dragging, or a smooth tension indicative of digging in. As the anchor begins to dig in and resist backward force, the engine may be throttled up to get a thorough set. If the anchor continues to drag, or sets after having dragged too far, it should be retrieved and moved back to the desired position (or another location chosen).
There are techniques of anchoring to limit the swing of a vessel if the anchorage has limited room:
Using an anchor weight, kellet or sentinel
Lowering a concentrated, heavy weight down the anchor line (rope or chain) directly in front of the bow to the seabed behaves like a heavy chain rode and lowers the angle of pull on the anchor. If the weight is suspended off the seabed, it acts as a spring or shock absorber to dampen the sudden actions that are normally transmitted to the anchor and can cause it to dislodge and drag. In light conditions, a kellet reduces the swing of the vessel considerably. In heavier conditions these effects disappear as the rode becomes straightened and the weight ineffective. Such a weight is known as an "anchor chum weight" or "angel" in the UK.
Forked moor
Using two anchors set approximately 45° apart, or wider angles up to 90°, from the bow is a strong mooring for facing into strong winds. To set anchors in this way, first one anchor is set in the normal fashion. Then, taking in on the first cable as the boat is motored into the wind and letting slack while drifting back, a second anchor is set approximately a half-scope away from the first on a line perpendicular to the wind. After this second anchor is set, the scope on the first is taken up until the vessel is lying between the two anchors and the load is taken equally on each cable.
This moor also to some degree limits the range of a vessel's swing to a narrower oval. Care should be taken that other vessels do not swing down on the boat due to the limited swing range.
Bow and stern
(Not to be confused with the Bahamian moor, below.) In the bow and stern technique, an anchor is set off each of the bow and the stern, which can severely limit a vessel's swing range and also align it to steady wind, current or wave conditions. One method of accomplishing this moor is to set a bow anchor normally, then drop back to the limit of the bow cable (or to double the desired scope, e.g. 8:1 if the eventual scope should be 4:1, 10:1 if the eventual scope should be 5:1, etc.) to lower a stern anchor. By taking up on the bow cable the stern anchor can be set. After both anchors are set, tension is taken up on both cables to limit the swing or to align the vessel.
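The drop-back arithmetic described above can be sketched as a short calculation; the helper below is purely illustrative (the function name is made up here), assuming only the rule that the bow cable is paid out to double the desired final scope before the stern anchor is lowered.

```python
# Bow-and-stern moor: pay out double the desired final scope on the bow
# cable, drop the stern anchor, then take up half the bow cable so the
# vessel ends up midway between the two anchors at the final scope.
def dropback_scope(final_scope: float) -> float:
    """Scope to pay out on the bow cable before dropping the stern anchor."""
    return 2 * final_scope

for s in (4, 5):
    print(f"final scope {s}:1 -> drop back to {dropback_scope(s):.0f}:1")
# final scope 4:1 -> drop back to 8:1
# final scope 5:1 -> drop back to 10:1
```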
Bahamian moor
Similar to the above, a Bahamian moor is used to sharply limit the swing range of a vessel, but allows it to swing to a current. One of the primary characteristics of this technique is the use of a swivel as follows: the first anchor is set normally, and the vessel drops back to the limit of anchor cable. A second anchor is attached to the end of the anchor cable, and is dropped and set. A swivel is attached to the middle of the anchor cable, and the vessel connected to that.
The vessel now swings in the middle of two anchors, which is acceptable in strong reversing currents, but a wind perpendicular to the current may break out the anchors, as they are not aligned for this load.
Backing an anchor
Also known as tandem anchoring, in this technique two anchors are deployed in line with each other, on the same rode. With the foremost anchor reducing the load on the aft-most, this technique can develop great holding power and may be appropriate in "ultimate storm" circumstances. It does not limit swinging range, and might not be suitable in some circumstances. There are complications, and the technique requires careful preparation and a level of skill and experience above that required for a single anchor.
Kedging
Kedging or warping is a technique for moving or turning a ship by using a relatively light anchor.
In yachts, a kedge anchor is an anchor carried in addition to the main, or bower anchors, and usually stowed aft. Every yacht should carry at least two anchors – the main or bower anchor and a second lighter kedge anchor. It is used occasionally when it is necessary to limit the turning circle as the yacht swings when it is anchored, such as in a narrow river or a deep pool in an otherwise shallow area. Kedge anchors are sometimes used to recover vessels that have run aground.
For ships, a kedge may be dropped while a ship is underway, or carried out in a suitable direction by a tender or ship's boat to enable the ship to be winched off if aground or swung into a particular heading, or even to be held steady against a tidal or other stream.
Historically, it was of particular relevance to sailing warships, which used them to outmaneuver opponents when the wind had dropped, but it might be used by any vessel in confined, shoal water to place it in a more desirable position, provided she had enough manpower.
Club hauling
Club hauling is an archaic technique. When a vessel is in a narrow channel or on a lee shore so that there is no room to tack the vessel in a conventional manner, an anchor attached to the lee quarter may be dropped from the lee bow. This is deployed when the vessel is head to wind and has lost headway. As the vessel gathers sternway the strain on the cable pivots the vessel around what is now the weather quarter turning the vessel onto the other tack. The anchor is then normally cut away (the ship's momentum prevents recovery without aborting the maneuver).
Weighing anchor
Since all anchors that embed themselves in the bottom require the strain to be along the seabed, anchors can be broken out of the bottom by shortening the rope until the vessel is directly above the anchor; at this point the anchor chain is "up and down", in naval parlance. If necessary, motoring slowly around the location of the anchor also helps dislodge it. Anchors are sometimes fitted with a trip line attached to the crown, by which they can be unhooked from rocks, coral, chain, or other underwater hazards.
The term aweigh describes an anchor when it is hanging on the rope and is not resting on the bottom. This is linked to the term to weigh anchor, meaning to lift the anchor from the sea bed, allowing the ship or boat to move. An anchor is described as aweigh when it has been broken out of the bottom and is being hauled up to be stowed. Aweigh should not be confused with under way, which describes a vessel that is not moored to a dock or anchored, whether or not the vessel is moving through the water. Aweigh is also often confused with away, which is incorrect.
Anchor as symbol
An anchor frequently appears on the flags and coats of arms of institutions involved with the sea, both naval and commercial, as well as of port cities and seacoast regions and provinces in various countries. There also exists in heraldry the "Anchored Cross", or Mariner's Cross, a stylized cross in the shape of an anchor. The symbol can be used to signify 'fresh start' or 'hope'. The New Testament refers to the Christian's hope as "an anchor of the soul". The Mariner's Cross is also referred to as St. Clement's Cross, in reference to the way this saint was killed (being tied to an anchor and thrown from a boat into the Black Sea in 102). Anchored crosses are occasionally a feature of coats of arms in which context they are referred to by the heraldic terms anchry or ancre.
In 1887, the Delta Gamma Fraternity adopted the anchor as its badge to signify hope.
The Unicode anchor (Miscellaneous Symbols) is represented by: ⚓.
See also
"Anchors Aweigh", United States Navy marching song
References
Bibliography
Blackwell, Alex & Daria; Happy Hooking – the Art of Anchoring, 2008, 2011, 2019 White Seahorse;
Edwards, Fred; Sailing as a Second Language: An illustrated dictionary, 1988 Highmark Publishing;
Hinz, Earl R.; The Complete Book of Anchoring and Mooring, Rev. 2d ed., 1986, 1994, 2001 Cornell Maritime Press;
Hiscock, Eric C.; Cruising Under Sail, second edition, 1965 Oxford University Press;
Pardey, Lin and Larry; The Capable Cruiser; 1995 Pardey Books/Paradise Cay Publications;
Rousmaniere, John; The Annapolis Book of Seamanship, 1983, 1989 Simon and Schuster;
Smith, Everrett; Cruising World's Guide to Seamanship: Hold me tight, 1992 New York Times Sports/Leisure Magazines
Further reading
William N. Brady (1864). The Kedge-anchor; Or, Young Sailors' Assistant.
First published as The Naval Apprentice's Kedge Anchor (New York: Taylor and Clement, 1841); as The Kedge-anchor: 3rd ed., New York, 1848; 6th ed., New York, 1852; 9th ed., New York, 1857.
External links
Anchor Tests: Soft Sand Over Hard Sand—Practical-Sailor
The Big Anchor Project
Anchor comparison
https://en.wikipedia.org/wiki/Ammonia
Ammonia is an inorganic chemical compound of nitrogen and hydrogen with the formula NH3. A stable binary hydride and the simplest pnictogen hydride, ammonia is a colourless gas with a distinct pungent smell. Biologically, it is a common nitrogenous waste, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to fertilisers. Around 70% of ammonia produced industrially is used to make fertilisers in various forms and composition, such as urea and diammonium phosphate. Ammonia in pure form is also applied directly into the soil.
Ammonia, either directly or indirectly, is also a building block for the synthesis of many pharmaceutical products and is used in many commercial cleaning products. It is mainly collected by downward displacement of both air and water.
Although common in nature—both terrestrially and in the outer planets of the Solar System—and in wide use, ammonia is both caustic and hazardous in its concentrated form. In many countries it is classified as an extremely hazardous substance, and is subject to strict reporting requirements by facilities that produce, store, or use it in significant quantities.
The global industrial production of ammonia in 2021 was 235 million tonnes. Industrial ammonia is sold either as ammonia liquor (usually 28% ammonia in water) or as pressurised or refrigerated anhydrous liquid ammonia transported in tank cars or cylinders.
For fundamental reasons, the production of ammonia from the elements hydrogen and nitrogen is difficult, requiring high pressures and high temperatures. The Haber process that enabled industrial production was invented at the beginning of the 20th century, revolutionizing agriculture.
Ammonia boils at −33.3 °C at a pressure of one atmosphere, so the liquid must be stored under pressure or at low temperature. Household ammonia or ammonium hydroxide is a solution of NH3 in water. The concentration of such solutions is measured in units of the Baumé scale (density), with 26 degrees Baumé (about 30% ammonia by weight) being the typical high-concentration commercial product.
Etymology
Pliny, in Book XXXI of his Natural History, refers to a salt named hammoniacum, so called because of its proximity to the nearby Temple of Jupiter Amun (Greek Ἄμμων Ammon) in the Roman province of Cyrenaica. However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
Natural occurrence
Ammonia is a chemical found in trace quantities on Earth, being produced from nitrogenous animal and vegetable matter. Ammonia and ammonium salts are also found in small quantities in rainwater, whereas ammonium chloride (sal ammoniac), and ammonium sulfate are found in volcanic districts. Crystals of ammonium bicarbonate have been found in Patagonia guano.
Ammonia is also found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point far below that of pure water if the ammonia concentration is high enough, and thus allow such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called ammoniacal.
Properties
Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules. Gaseous ammonia condenses to a colourless liquid, which boils at −33.3 °C, and freezes to colourless crystals at −77.7 °C. Little data is available at very high temperatures and pressures, such as supercritical conditions.
Solid
The crystal symmetry is cubic, Pearson symbol cP16, space group P2₁3 (No. 198), lattice constant 0.5125 nm.
Liquid
Liquid ammonia possesses strong ionising powers, reflecting its high dielectric constant (ε ≈ 22). Liquid ammonia has a very high standard enthalpy change of vapourization (23.35 kJ/mol; for comparison, water's is 40.65 kJ/mol, methane's 8.19 kJ/mol and phosphine's 14.6 kJ/mol) and can therefore be used in laboratories in uninsulated vessels without additional refrigeration. See liquid ammonia as a solvent.
Solvent properties
Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic. The maximum concentration of ammonia in water (a saturated solution) has a density of 0.880 g/cm3 and is often known as '.880 ammonia'.
Table of thermal and physical properties of saturated liquid ammonia:
Table of thermal and physical properties of ammonia () at atmospheric pressure:
Decomposition
At high temperature and in the presence of a suitable catalyst, or in a pressurised vessel with constant volume and high temperature, ammonia decomposes into its constituent elements:
2 NH3 → N2 + 3 H2
Decomposition of ammonia is a slightly endothermic process requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas. Ammonia can also be used as a source of hydrogen for acid fuel cells if the unreacted ammonia can be removed. Ruthenium and platinum catalysts were found to be the most active, whereas supported Ni catalysts were less active.
Structure
The ammonia molecule has a trigonal pyramidal shape as predicted by the valence shell electron pair repulsion theory (VSEPR theory) with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs; therefore, the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of ammonium ions. The ammonium ion has the shape of a regular tetrahedron and is isoelectronic with methane.
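The quoted pH and protonation figures can be reproduced from the acid dissociation constant of the ammonium ion; the pKa of 9.25 used below is a standard textbook value assumed here, not stated in this article.

```python
import math

pKa = 9.25                       # assumed pKa of NH4+ (standard textbook value)
Kb = 10 ** -(14 - pKa)           # base constant of NH3, ~1.8e-5

# pH of a 1.0 M aqueous ammonia solution (weak-base approximation):
C = 1.0
oh = math.sqrt(Kb * C)           # [OH-] ≈ sqrt(Kb * C)
pH = 14 + math.log10(oh)
print(f"pH ≈ {pH:.1f}")          # pH ≈ 11.6

# Fraction protonated (NH4+) at neutral pH 7, via Henderson–Hasselbalch:
frac = 1 / (1 + 10 ** (7 - pKa))
print(f"{frac:.1%} protonated")  # 99.4% protonated
```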
The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser.
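The stated wavelength follows directly from the resonance frequency, as a quick check:

```python
c = 299_792_458.0                  # speed of light, m/s
f = 23.79e9                        # ammonia inversion resonance frequency, Hz
wavelength_cm = c / f * 100
print(f"{wavelength_cm:.3f} cm")   # 1.260 cm
```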
Amphotericity
One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form ammonium salts; thus, with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction.
As a demonstration experiment under air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear 'out of nothing' as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles.
The salts produced by the action of ammonia on acids are known as the ammonium salts and all contain the ammonium ion (NH4+).
Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of formation of amides (which contain the NH2− ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron), which slowly converts to lithium amide:
2 Li + 2 NH3 → 2 LiNH2 + H2
Self-dissociation
Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates:
2 NH3 ⇌ NH4+ + NH2−
Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations () and amide anions () to be present in solution. At standard pressure and temperature,
K = [NH4+][NH2−] = 10−30.
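Because autoionisation produces the two ions in equal amounts, the quoted ion product fixes their concentrations:

```python
import math

K = 1e-30              # ion product [NH4+][NH2-] quoted above
conc = math.sqrt(K)    # autoionisation gives equal concentrations of each ion
# ~1e-15 mol/L of each ion, far weaker self-dissociation than water's ~1e-7
```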
Combustion
Ammonia does not burn readily or sustain combustion, except in narrow fuel-to-air mixtures of 15–25% ammonia by volume in air. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride (NCl3) is also formed.
The combustion of ammonia to form nitrogen and water is exothermic:
4 NH3 + 3 O2 → 2 N2 + 6 H2O, ΔH°r = −1267.20 kJ (or −316.8 kJ/mol if expressed per mol of NH3)
The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to N2 and O2, which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid:
4 NH3 + 5 O2 → 4 NO + 6 H2O
Subsequent reactions lead to HNO3:
2 NO + O2 → 2 NO2
3 NO2 + H2O → 2 HNO3 + NO
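The two enthalpy figures quoted above for ammonia combustion are mutually consistent once condensation of the product water is accounted for; the 44.0 kJ/mol latent heat of condensation of water used below is a standard value assumed here.

```python
dH_rxn = -1267.20            # kJ per 4 NH3 + 3 O2 -> 2 N2 + 6 H2O (water vapour)
per_mol_nh3 = dH_rxn / 4
print(per_mol_nh3)           # -316.8 (kJ/mol of NH3)

# Condensing the water (6 mol H2O per 4 mol NH3) releases a further ~44.0 kJ/mol:
dH_comb = per_mol_nh3 - (6 / 4) * 44.0
print(round(dH_comb, 1))     # -382.8, matching the quoted -382.81 kJ/mol
```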
The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vapourization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in 100% relative humidity air is 15.95–26.55%. For studying the kinetics of ammonia combustion, knowledge of a detailed reliable reaction mechanism is required, but this has been challenging to obtain.
Formation of other compounds
Ammonia is a direct or indirect precursor to most manufactured nitrogen-containing compounds.
In organic chemistry, ammonia can act as a nucleophile in substitution reactions. Amines can be formed by the reaction of ammonia with alkyl halides or with alcohols. The resulting −NH2 group is also nucleophilic, so secondary and tertiary amines are often formed. When such multiple substitution is not desired, an excess of ammonia helps minimise it. For example, methylamine is prepared by the reaction of ammonia with chloromethane or with methanol. In both cases, dimethylamine and trimethylamine are co-produced. Ethanolamine is prepared by a ring-opening reaction with ethylene oxide, and when the reaction is allowed to go further it produces diethanolamine and triethanolamine. The reaction of ammonia with 2-bromopropanoic acid has been used to prepare racemic alanine in 70% yield.
Amides can be prepared by the reaction of ammonia with carboxylic acid derivatives. For example, ammonia reacts with formic acid (HCOOH) to yield formamide (HCONH2) when heated. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides by heating to 150–200 °C as long as no thermally sensitive groups are present.
The hydrogen in ammonia is susceptible to replacement by a myriad of substituents. When dry ammonia gas is heated with metallic sodium it converts to sodamide, NaNH2. With chlorine, monochloramine is formed.
Pentavalent ammonia is known as λ5-amine, nitrogen pentahydride or, more commonly, ammonium hydride (NH5). This crystalline solid is only stable under high pressure and decomposes back into trivalent ammonia (λ3-amine) and hydrogen gas at normal conditions. This substance was investigated as a possible solid rocket fuel in 1966.
Ammonia as a ligand
Ammonia can act as a ligand in transition metal complexes. It is a pure σ-donor, in the middle of the spectrochemical series, and shows intermediate hard–soft behaviour (see also ECW model). Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. Some notable ammine complexes include tetraamminediaquacopper(II) ([Cu(NH3)4(H2O)2]2+), a dark blue complex formed by adding ammonia to a solution of copper(II) salts. Tetraamminediaquacopper(II) hydroxide is known as Schweizer's reagent, and has the remarkable ability to dissolve cellulose. Diamminesilver(I) ([Ag(NH3)2]+) is the active species in Tollens' reagent. Formation of this complex can also help to distinguish between precipitates of the different silver halides: silver chloride (AgCl) is soluble in dilute (2 M) ammonia solution, silver bromide (AgBr) is only soluble in concentrated ammonia solution, whereas silver iodide (AgI) is insoluble in aqueous ammonia.
Ammine complexes of chromium(III) were known in the late 19th century, and formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted only two isomers (fac- and mer-) of the complex could be formed, and concluded the ligands must be arranged around the metal ion at the vertices of an octahedron. This proposal has since been confirmed by X-ray crystallography.
An ammine ligand bound to a metal ion is markedly more acidic than a free ammonia molecule, although deprotonation in aqueous solution is still rare. One example is the reaction of mercury(II) chloride with ammonia (Calomel reaction) where the resulting mercuric amidochloride is highly insoluble.
Ammonia forms 1:1 adducts with a variety of Lewis acids, such as phenol. Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04.
Detection and determination
Ammonia in solution
Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium hydroxide (NaOH) or potassium hydroxide (KOH), the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, (NH4)2PtCl6.
Gaseous ammonia
Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant, and irritation increases with concentration; the permissible exposure limit is 25 ppm, and exposure is lethal above 500 ppm. Higher concentrations are hardly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% by volume.
Ammoniacal nitrogen (NH3–N)
Ammoniacal nitrogen (NH3–N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia, and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre).
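The NH3–N convention reports concentrations as the mass of nitrogen alone, whatever the chemical form. The helper below is a hypothetical illustration of that conversion (the function name and molar masses are supplied here, not taken from this article):

```python
M_N = 14.007      # g/mol, nitrogen
M_NH3 = 17.031    # g/mol, ammonia
M_NH4 = 18.039    # g/mol, ammonium ion

def as_nh3_n(mg_per_l: float, molar_mass: float) -> float:
    """Convert a species concentration (mg/L) to ammoniacal nitrogen (mg/L NH3-N)."""
    return mg_per_l * M_N / molar_mass

# 1.0 mg/L of NH3 and 1.0 mg/L of NH4+ expressed as NH3-N:
print(round(as_nh3_n(1.0, M_NH3), 3))  # 0.822
print(round(as_nh3_n(1.0, M_NH4), 3))  # 0.776
```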
History
The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the 'Ammonians' (now the Siwa oasis in northwestern Egypt, where salt lakes still exist). The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. Hammoniacus sal appears in the writings of Pliny, although it is not known whether the term is identical with the more modern sal ammoniac (ammonium chloride).
The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dyeing cloth, and to remove rust from iron. It was also used by ancient dentists to wash teeth.
In the form of sal ammoniac (نشادر, nushadir), ammonia was important to the Muslim alchemists. It was mentioned in the Book of Stones, likely written in the 9th century and attributed to Jābir ibn Hayyān. It was also important to the European alchemists of the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name 'spirit of hartshorn' was applied to ammonia.
Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting sal ammoniac (ammonium chloride) with calcined magnesia (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773 and was termed by him 'alkaline air'. Eleven years later in 1785, Claude Louis Berthollet ascertained its composition.
The Haber–Bosch process to produce ammonia from the nitrogen in the air was developed by Fritz Haber and Carl Bosch in 1909 and patented in 1910. It was first used on an industrial scale in Germany during World War I, following the allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain war efforts.
Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process.
With the advent of the steel industry in the 20th century, ammonia became a byproduct of the production of coking coal.
Applications
Solvent
Liquid ammonia is the best-known and most widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of ammonia with those of water shows that ammonia has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity; this is due at least in part to the weaker hydrogen bonding in ammonia and because such bonding cannot form cross-linked networks, since each ammonia molecule has only one lone pair of electrons compared with two for each water molecule. The ionic self-dissociation constant of liquid ammonia at −50 °C is about 10−33.
Solubility of salts
Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar.
Solutions of metals
Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules.
These solutions are very useful as strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as immiscible phases.
Redox properties of liquid ammonia
The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the standard potential for oxidation to dinitrogen is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts.
Fertiliser
In the US, approximately 88% of ammonia was used as fertiliser, either as its salts, as solutions, or anhydrously. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide, 110 million tonnes are applied each year.
Precursor to nitrogenous compounds
Ammonia is directly or indirectly the precursor to most nitrogen-containing compounds. Virtually all synthetic nitrogen compounds are derived from ammonia. An important derivative is nitric acid. This key material is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at high temperature and ≈9 atm. Nitric oxide is an intermediate in this conversion:
4 NH3 + 5 O2 → 4 NO + 6 H2O
Nitric acid is used for the production of fertilisers, explosives and many organonitrogen compounds.
Ammonia is also used to make the following compounds:
Hydrazine, in the Olin Raschig process and the peroxide process
Hydrogen cyanide, in the BMA process and the Andrussow process
Hydroxylamine and ammonium carbonate, in the Raschig process
Phenol, in the Raschig–Hooker process
Urea, in the Bosch–Meiser urea process and in Wöhler synthesis
Amino acids, using Strecker amino-acid synthesis
Acrylonitrile, in the Sohio process
Ammonia can also be used to make compounds in reactions that are not specifically named. Examples of such compounds include ammonium perchlorate, ammonium nitrate, formamide, dinitrogen tetroxide, alprazolam, ethanolamine, ethyl carbamate, hexamethylenetetramine and ammonium bicarbonate.
Cleansing agent
Household 'ammonia' is a solution of NH3 in water, and is used as a general purpose cleaner for many surfaces. Because ammonia results in a relatively streak-free shine, one of its most common uses is to clean glass, porcelain, and stainless steel. It is also frequently used for cleaning ovens and for soaking items to loosen baked-on grime. Household ammonia ranges in concentration by weight from 5% to 10% ammonia. US manufacturers of cleaning products are required to provide the product's material safety data sheet that lists the concentration used.
Solutions of ammonia (5–10% by weight) are used as household cleaners, particularly for glass. These solutions are irritating to the eyes and mucous membranes (respiratory and digestive tracts), and to a lesser extent the skin. Experts advise that caution be used to ensure the chemical is never mixed into any liquid containing bleach, other chlorine-containing products, or strong oxidants, due to the danger of forming toxic chloramine fumes.
Experts also warn not to use ammonia-based cleaners (such as glass or window cleaners) on car touchscreens, due to the risk of damage to the screen's anti-glare and anti-fingerprint coatings.
Fermentation
Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation.
Antimicrobial agent for food products
As early as 1895, it was known that ammonia was 'strongly antiseptic ... it requires 1.4 grams per litre to preserve beef tea (broth).' In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in three types of animal feed, but not in silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef.
Lean finely textured beef (popularly known as 'pink slime') in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill E. coli. The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces E. coli to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef.
Fuel
Ammonia has been used as a fuel and is a proposed future alternative to fossil fuels and hydrogen.
Compared to hydrogen, ammonia is easier to store. Compared to hydrogen as a fuel, ammonia is much more energy efficient, and could be produced, stored and delivered at a much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. The raw energy density of liquid ammonia is 11.5 MJ/L, which is about a third that of diesel.
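The volumetric comparison above can be made concrete with a rough back-of-the-envelope calculation. Only the 11.5 MJ/L ammonia figure comes from the text; the diesel and liquid-hydrogen energy densities below are assumed typical literature values used for illustration.

```python
# Rough volumetric energy-density comparison (MJ/L).
AMMONIA = 11.5    # MJ/L, liquid ammonia (figure from the text)
DIESEL = 35.8     # MJ/L, assumed typical value
LIQUID_H2 = 8.5   # MJ/L, assumed typical value

print(f"ammonia/diesel:    {AMMONIA / DIESEL:.2f}")     # ~0.32, "about a third"
print(f"ammonia/liquid H2: {AMMONIA / LIQUID_H2:.2f}")  # >1: more energy per litre than liquid H2
```

The ratios show why ammonia is attractive as a hydrogen carrier: per litre it stores less energy than diesel but more than cryogenic liquid hydrogen.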
Ammonia can be converted back to hydrogen to be used to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. Ammonia to hydrogen conversion can be achieved through the sodium amide process or the catalytic decomposition of ammonia using solid catalysts.
Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. Charles Avenue Streetcar line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium.
Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. However, ammonia cannot be easily used in existing Otto cycle engines because of its very narrow flammability range. Despite this, several tests have been run. Its high octane rating of 120 and low flame temperature allow the use of high compression ratios without a penalty of high NOx production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot.
Ammonia production currently creates 1.8% of global CO2 emissions. 'Green ammonia' is ammonia produced by using green hydrogen (hydrogen produced by electrolysis), whereas 'blue ammonia' is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming where the carbon dioxide has been captured and stored).
Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches that of the oxidiser, liquid oxygen, which simplified the aircraft's design.
In early August 2018, scientists from Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) announced that they had succeeded in developing a process to release hydrogen from ammonia and harvest it at ultra-high purity as a fuel for cars, using a special membrane. Two demonstration fuel cell vehicles, a Hyundai Nexo and a Toyota Mirai, have used the technology.
In 2020, Saudi Arabia shipped 40 metric tons of liquid 'blue ammonia' to Japan for use as a fuel. It was produced as a by-product by petrochemical industries, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. The company ACWA Power and the city of Neom have announced the construction of a green hydrogen and ammonia plant in 2020.
Green ammonia is considered a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-fuelled ship; DSME plans to commercialise it by 2025. The use of ammonia as a potential alternative fuel for aircraft jet engines is also being explored.
Japan intends to implement a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to assist domestic and other Asian utilities to accelerate their transition to carbon neutrality.
In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held.
In June 2022, IHI Corporation succeeded in reducing greenhouse gas emissions by over 99% during combustion of liquid ammonia in a 2,000-kilowatt-class gas turbine, achieving truly CO2-free power generation.
In July 2022, the Quad nations of Japan, the U.S., Australia and India agreed to promote technological development for clean-burning hydrogen and ammonia as fuels at the security grouping's first energy meeting. When ammonia is burned, however, significant amounts of NOx are produced. Nitrous oxide may also be a problem.
Other
Remediation of gaseous emissions
Ammonia is used to scrub SO2 from the burning of fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertiliser. Ammonia neutralises the nitrogen oxide (NOx) pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst.
Ammonia may be used to mitigate gaseous spills of phosgene.
As a hydrogen carrier
Because it is liquid at ambient temperature under its own vapour pressure and has high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper to transport than liquid hydrogen itself.
Refrigeration – R717
Because of its vaporisation properties, ammonia is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. It suffers from the disadvantages of toxicity and the need for corrosion-resistant components, which restrict its domestic and small-scale use. Along with its use in modern vapour-compression refrigeration, it is used in a mixture with hydrogen and water in absorption refrigerators. The Kalina cycle, which is of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture. Ammonia coolant is also used in the S1 radiator aboard the International Space Station, in two loops that regulate the internal temperature and enable temperature-dependent experiments.
The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are extremely potent and stable greenhouse gases.
Stimulant
Ammonia, as the vapour released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added.
Textile
Liquid ammonia is used for the treatment of cotton materials, giving properties similar to those produced by mercerisation with alkalis. In particular, it is used for the prewashing of wool.
Lifting gas
At standard temperature and pressure, ammonia is less dense than air and has approximately 45–48% of the lifting power of hydrogen or helium. Ammonia has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast).
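The 45–48% figure can be roughly reproduced from molar masses alone, since at equal temperature and pressure the density of an ideal gas is proportional to its molar mass. A minimal sketch (the sea-level air density is an assumed value, and the exact percentages depend on conditions):

```python
# Ideal-gas estimate of ammonia's lifting power relative to hydrogen and helium.
# Net buoyant lift per unit volume is rho_air - rho_gas, and gas density
# scales with molar mass at fixed temperature and pressure.
AIR, NH3, H2, HE = 28.97, 17.03, 2.016, 4.003  # molar masses, g/mol
RHO_AIR = 1.225  # kg/m^3 at sea level, 15 degrees C (assumed)

def lift(molar_mass):
    """Net buoyant lift in kg per cubic metre of lifting gas."""
    return RHO_AIR * (1 - molar_mass / AIR)

print(f"vs hydrogen: {lift(NH3) / lift(H2):.0%}")  # ~44%
print(f"vs helium:   {lift(NH3) / lift(HE):.0%}")  # ~48%
```

The results land close to the 45–48% range quoted above; note that the air density cancels in the ratios, so only the molar masses matter.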
Fuming
Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour.
Safety
The US Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in the environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) recently reduced the IDLH (Immediately Dangerous to Life and Health, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 ppm to 300 ppm, based on more conservative interpretations of the original research from 1943. Other organisations have varying exposure levels. US Navy standards [U.S. Bureau of Ships 1962] set maximum allowable concentrations (MACs) of 25 ppm for continuous exposure (60 days) and 400 ppm for a 1-hour exposure.
Ammonia vapour has a sharp, irritating, pungent odor that acts as a warning of potentially dangerous exposure. The average odor threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the US as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in large quantities.
Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns.
Toxicity
The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia even at dilute concentrations is highly toxic to aquatic animals, and for this reason it is classified as dangerous for the environment. Atmospheric ammonia plays a key role in the formation of fine particulate matter.
Ammonia is a constituent of tobacco smoke.
Coking wastewater
Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla steelworks in South Australia is one example of a coke-producing facility that discharges ammonia into marine waters.
Aquaculture
Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. At lower concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish species and can result in poor growth and feed conversion rates, reduced fecundity and fertility, and increased stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake, and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L.
During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis, so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and, in open water systems, to the surrounding environment.
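Because only the un-ionised NH3 fraction is acutely toxic, a measured total ammonia concentration is routinely converted to un-ionised ammonia using the NH4+/NH3 acid–base equilibrium. A simplified sketch, assuming a textbook pKa of about 9.25 at 25 °C (real aquaculture calculations correct the pKa for temperature and salinity):

```python
# Fraction of total ammonia nitrogen present as toxic un-ionised NH3,
# from the NH4+ <-> NH3 + H+ equilibrium: fraction = 1 / (1 + 10^(pKa - pH)).
def unionised_fraction(pH, pKa=9.25):
    """Fraction of total ammonia that is un-ionised NH3 at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

total_ammonia = 1.0  # mg/L, hypothetical measured value
for pH in (6.5, 7.5, 8.5, 9.5):
    nh3 = total_ammonia * unionised_fraction(pH)
    print(f"pH {pH}: {nh3:.3f} mg/L un-ionised NH3")
```

The strong pH dependence is the practical point: the same total ammonia reading can be harmless in acidic water yet exceed the ~0.05 mg/L harm threshold in alkaline water.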
Storage information
Similar to propane, anhydrous ammonia boils below room temperature at atmospheric pressure, so a pressure-rated storage vessel is needed to contain the liquid. Ammonia is used in numerous industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2% by weight water content is not corrosive to carbon steel, and carbon steel storage tanks holding ammonia with 0.2% by weight or more of water could last more than 50 years in service. Experts warn that ammonium compounds should not be allowed to come into contact with bases (unless in an intended and contained reaction), as dangerous quantities of ammonia gas could be released.
Laboratory
The hazards of ammonia solutions depend on the concentration: 'dilute' ammonia solutions are usually 5–10% by weight (< 5.62 mol/L); 'concentrated' solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3, and a solution that has a lower density will be more concentrated. The European Union classification of ammonia solutions is given in the table.
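The stated figures are mutually consistent, as a quick unit conversion shows. In the sketch below, the 0.907 g/cm3 density for the 25 wt% solution comes from the text, while the 0.958 g/cm3 density used for a ~10 wt% solution is an assumed value:

```python
# Converting a solution's mass fraction and density to molarity:
# mol/L = (g solution per L) * (mass fraction) / (molar mass of NH3).
M_NH3 = 17.031  # g/mol

def molarity(density_g_per_cm3, mass_fraction):
    """Molar concentration (mol/L) of NH3 in an aqueous solution."""
    grams_nh3_per_litre = density_g_per_cm3 * 1000 * mass_fraction
    return grams_nh3_per_litre / M_NH3

print(f"{molarity(0.907, 0.25):.1f} mol/L")  # concentrated 25 wt%: ~13.3
print(f"{molarity(0.958, 0.10):.2f} mol/L")  # dilute ~10 wt%: ~5.62
```

The dilute result reproduces the < 5.62 mol/L threshold quoted above for 5–10 wt% solutions.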
The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions be handled only in a fume hood. Saturated ('0.880') solutions can develop significant pressure inside a closed bottle in warm weather, and experts also warn that the bottle be opened with care. This is not usually a problem for 25% ('0.900') solutions.
Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Prolonged contact of ammonia solutions with silver, mercury or iodide salts can also lead to explosive products. Such mixtures are often formed in qualitative inorganic analysis, and should be lightly acidified and diluted (<6% w/v) before disposal once the test is completed.
Laboratory use of anhydrous ammonia (gas or liquid)
Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25%). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers the sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, making brass fittings unsuitable for handling the gas. Liquid ammonia can also attack rubber and certain plastics.
Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides.
Production
Ammonia has one of the highest rates of production of any inorganic chemical. Production is sometimes expressed in terms of 'fixed nitrogen'. Global production was estimated at 160 million tonnes in 2020 (147 million tonnes of fixed nitrogen). China accounted for 26.5% of that, followed by Russia at 11.0%, the United States at 9.5%, and India at 8.3%.
Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides or by quicklime:
CaO + 2 NH4Cl → CaCl2 + H2O + 2 NH3
For small-scale laboratory synthesis, one can heat urea with calcium hydroxide:
Ca(OH)2 + CO(NH2)2 → CaCO3 + 2 NH3
Haber–Bosch
Electrochemical
Ammonia can be synthesized electrochemically. The only required inputs are sources of nitrogen (potentially atmospheric) and hydrogen (water), allowing generation at the point of use. The availability of renewable energy creates the possibility of zero emission production.
'Green Ammonia' is a name for ammonia produced from hydrogen that is in turn produced from carbon-free sources such as electrolysis of water. Ammonia from this source can be used as a liquid fuel with zero contribution to global climate change.
Another electrochemical synthesis mode involves the reductive formation of lithium nitride, which can be protonated to ammonia, given a proton source. Ethanol has been used as such a source, although it may degrade. The first use of this chemistry was reported in 1930, where lithium solutions in ethanol were used to produce ammonia at pressures of up to 1000 bar. In 1994, Tsuneto et al. used lithium electrodeposition in tetrahydrofuran to synthesize ammonia at more moderate pressures with reasonable Faradaic efficiency. Other studies have since used the ethanol–tetrahydrofuran system for electrochemical ammonia synthesis. In 2019, Lazouski et al. proposed a mechanism to explain observed ammonia formation kinetics.
In 2020, Lazouski et al. developed a solvent-agnostic gas diffusion electrode to improve nitrogen transport to the reactive lithium. The study observed production rates of up to 30 ± 5 nmol/s/cm2 and Faradaic efficiencies of up to 47.5 ± 4% at ambient temperature and 1 bar pressure.
In 2021, Suryanto et al. replaced ethanol with a tetraalkyl phosphonium salt. This cation can stably undergo deprotonation–reprotonation cycles, while it enhances the medium's ionic conductivity. The study observed production rates of 53 ± 1 nmol/s/cm2 at 69 ± 1% Faradaic efficiency under 0.5-bar hydrogen and 19.5-bar nitrogen partial pressures at ambient temperature.
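Reported production rates and Faradaic efficiencies tie directly to current density, since electrochemical nitrogen reduction transfers three electrons per NH3 molecule. A sketch of the back-calculation, using the 53 nmol/s/cm2 and 69% figures quoted above:

```python
# Back-calculating the total current density implied by a reported
# NH3 production rate and Faradaic efficiency.
F = 96485.0           # C/mol, Faraday constant
ELECTRONS_PER_NH3 = 3  # N2 + 6 H+ + 6 e- -> 2 NH3

def current_density(rate_nmol_s_cm2, faradaic_eff):
    """Total current density (mA/cm^2) consistent with the given NH3
    partial current and Faradaic efficiency."""
    partial_A_cm2 = ELECTRONS_PER_NH3 * F * rate_nmol_s_cm2 * 1e-9
    return 1000 * partial_A_cm2 / faradaic_eff

# Figures quoted above for Suryanto et al. (2021):
print(f"{current_density(53, 0.69):.0f} mA/cm^2")  # ~22 mA/cm^2
```

The same relation works in reverse: given a measured current and charge, the Faradaic efficiency is the fraction of charge accounted for by the NH3 actually produced.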
Role in biological systems and human disease
Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75% of air), few living creatures are capable of using atmospheric nitrogen in its diatomic form, N2 gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia bacteria that create ammonia from atmospheric nitrogen.
In humans, inhaling ammonia in high concentrations can be fatal. Exposure to ammonia can cause headaches, edema, impaired memory, seizures and coma as it is neurotoxic in nature.
Biosynthesis
In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation. The scientific interest in this problem is motivated by the unusual structure of the enzyme's active site, which consists of an iron–molybdenum–sulfur cluster (the FeMo cofactor).
Ammonia is also a metabolic product of amino acid deamination catalysed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted by the liver to urea, which is much less toxic and in particular less basic. This urea is a major component of the dry weight of urine. Most reptiles, birds, insects, and snails excrete solely uric acid as nitrogenous waste.
Physiology
Ammonia plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurologic disease common in people with urea cycle defects and organic acidurias.
Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion.
Excretion
Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss.
Beyond Earth
Ammonia has been detected in the atmospheres of the giant planets Jupiter, Saturn, Uranus and Neptune, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. It is found on Deimos and Phobos – the two moons of Mars.
Interstellar space
Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected.
The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium.
The following isotopic species of ammonia have been detected: NH3, 15NH3, NH2D, NHD2, and ND3. The detection of triply deuterated ammonia was considered a surprise, as deuterium is relatively scarce. It is thought that low-temperature conditions allow this molecule to survive and accumulate.
Since its interstellar discovery, NH3 has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, NH3 has been widely detected astronomically – its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlights the range of detectors that have been used to identify ammonia.
The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer.
Interstellar formation mechanisms
The interstellar abundance of ammonia has been measured for a variety of environments. The [NH3]/[H2] ratio has been estimated to range from 10−7 in small dark clouds up to 10−5 in the dense core of the Orion molecular cloud complex. Although a total of 18 production routes have been proposed, the principal formation mechanism for interstellar NH3 is the dissociative recombination reaction:
NH4+ + e− → NH3 + H
The rate constant, k, of this reaction depends on the temperature of the environment. Assuming an electron abundance of 10−7, typical of molecular clouds, the formation proceeds readily in a molecular cloud of total density 105 cm−3.
All other proposed formation reactions have rate constants between two and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of this minor role, one such reaction, assuming densities of 105 cm−3 and an [NH3]/[H2] ratio of 10−7, proceeds more than three orders of magnitude more slowly than the primary reaction above.
Some of the other possible formation reactions are:
Interstellar destruction mechanisms
There are 113 total proposed reactions leading to the destruction of NH3. Of these, 39 were tabulated in extensive tables of the chemistry among C, N and O compounds. A review of interstellar ammonia cites the following reactions as the principal dissociation mechanisms:
with rate constants of 4.39×10−9 and 2.2×10−9, respectively. Assuming these rate constants and abundances typical of cold, dense molecular clouds (total densities of n = 105 cm−3), the two reactions run at rates of 8.8×10−9 and 4.4×10−13, respectively. Clearly, between these two primary reactions, the first is the dominant destruction route, with a rate ≈10,000 times faster than the second, owing to the relatively high abundance of its reaction partner.
Single antenna detections
Radio observations of NH3 from the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components – a background ridge and an unresolved core. The background corresponds well with the locations of previously detected CO. The 25-m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, H2O masers, Herbig–Haro objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds.
Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy.
Interferometric studies
VLA observations of NH3 in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region.
Extragalactic ammonia was imaged using the VLA in IC 342. The hot gas has temperatures above 70 K, which was inferred from ammonia line ratios, and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. Ammonia was also monitored by the VLA toward a sample of four galactic ultracompact HII regions: G9.62+0.19, G10.47+0.03, G29.96-0.02, and G31.41+0.31. Based upon temperature and density diagnostics, it is concluded that in general such clumps are probably the sites of massive star formation in an early evolutionary phase, prior to the development of an ultracompact HII region.
Infrared detections
Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin–Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines.
A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze.
A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' using the (J,K) = (1,1) rotation–inversion line of NH3. In general, the cores are not spherically shaped, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars.
Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds, which are associated with the high-latitude galactic infrared cirrus. The finding is significant because these clouds may represent the birthplaces of the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk.
Observations of nearby dark clouds
By balancing collisional excitation and stimulated emission against spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a two-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines – indicative not only of low temperatures, but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 104 and 105 cm−3 in dark clouds. Mapping of NH3 emission gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation.
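The two-level approximation above can be sketched numerically. The populations of the NH3 (1,1) inversion doublet follow a Boltzmann factor, and because the level splitting (hν/k ≈ 1.14 K for the ~23.7 GHz inversion transition, an assumed standard value) is far below even dark-cloud temperatures, the two levels are nearly equally populated, which is what makes line ratios a sensitive thermometer:

```python
import math

# Two-level Boltzmann sketch of the NH3 (1,1) inversion doublet.
H = 6.626e-34    # J s, Planck constant
K_B = 1.381e-23  # J/K, Boltzmann constant
NU = 23.694e9    # Hz, NH3 (1,1) inversion frequency (assumed value)

def upper_to_lower(T_ex, g_ratio=1.0):
    """Population ratio n_upper/n_lower at excitation temperature T_ex (K)."""
    return g_ratio * math.exp(-H * NU / (K_B * T_ex))

print(f"hv/k = {H * NU / K_B:.2f} K")              # ~1.14 K level splitting
print(f"ratio at 10 K: {upper_to_lower(10):.3f}")  # close to 1: levels nearly equal
```

Inverting this relation (measuring the population ratio from line intensities and solving for T_ex) is the essence of using ammonia as an interstellar thermometer.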
UC HII regions
Ultra-compact HII regions are among the best tracers of high-mass star formation. The dense material surrounding UCHII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars.
Extragalactic detection
Ammonia has been detected in external galaxies, and by simultaneously measuring several lines, it is possible to directly measure the gas temperature in these galaxies. Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of parsecs. This picture is consistent with the picture within our Milky Way galaxy: hot, dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on the scale of several hundred parsecs (giant molecular clouds; GMCs).
See also
Notes
References
Works cited
Further reading
External links
International Chemical Safety Card 0414 (anhydrous ammonia), ilo.org.
International Chemical Safety Card 0215 (aqueous solutions), ilo.org.
Emergency Response to Ammonia Fertiliser Releases (Spills) for the Minnesota Department of Agriculture.ammoniaspills.org
National Institute for Occupational Safety and Health – Ammonia Page, cdc.gov
NIOSH Pocket Guide to Chemical Hazards – Ammonia, cdc.gov
Ammonia, video
Bases (chemistry)
Foul-smelling chemicals
Gaseous signaling molecules
Household chemicals
Industrial gases
Inorganic solvents
Nitrogen cycle
Nitrogen hydrides
Nitrogen(−III) compounds
Refrigerants
Toxicology
https://en.wikipedia.org/wiki/Alabaster
Alabaster is a mineral and a soft rock used for carvings and as a source of plaster powder. Archaeologists, geologists, and the stone industry have different definitions and usages for the word alabaster. In archaeology, the term alabaster is a category of objects and artefacts made from the varieties of two different minerals: (i) the fine-grained, massive type of gypsum, and (ii) the fine-grained, banded type of calcite.
In geology, gypsum is a type of alabaster that chemically is a hydrous sulfate of calcium, whereas calcite is a carbonate of calcium. As types of alabaster, gypsum and calcite have similar properties, such as light color, translucence, and soft stones that can be carved and sculpted; thus the historical use and application of alabaster for the production of carved, decorative artefacts and objets d’art. Calcite alabaster also is known as: onyx-marble, Egyptian alabaster, and Oriental alabaster, which terms usually describe either a compact, banded travertine stone or a stalagmitic limestone colored with swirling bands of cream and brown.
In general, ancient alabaster is calcite in the wider Middle East, including Egypt and Mesopotamia, while it is gypsum in medieval Europe. Modern alabaster is most likely calcite but may be either. Both are easy to work and slightly soluble in water. They have been used for making a variety of indoor artwork and carving, as they will not survive long outdoors.
The two kinds are readily distinguished by their different hardnesses: gypsum alabaster (Mohs hardness 1.5 to 2) is so soft that a fingernail scratches it, while calcite (Mohs hardness 3) cannot be scratched in this way but yields to a knife. Moreover, calcite alabaster, being a carbonate, effervesces when treated with hydrochloric acid, while gypsum alabaster remains almost unaffected.
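The two distinguishing tests can be combined into a simple decision rule. A minimal sketch (the function name and hardness threshold are illustrative, not a standard gemological procedure):

```python
def identify_alabaster(mohs_hardness: float, effervesces_in_hcl: bool) -> str:
    """Rough field identification of an alabaster sample.

    Calcite alabaster (Mohs ~3) is a carbonate and effervesces in dilute
    hydrochloric acid; gypsum alabaster (Mohs ~1.5-2) is soft enough to be
    scratched by a fingernail and is almost unaffected by the acid.
    """
    if effervesces_in_hcl:
        return "calcite alabaster"
    if mohs_hardness <= 2.0:  # a fingernail is roughly Mohs 2.5
        return "gypsum alabaster"
    return "inconclusive"
```

A sample that a fingernail scratches and that stays quiet in acid would thus be classed as gypsum alabaster.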
Etymology
The English word "alabaster" was borrowed from Old French , in turn derived from Latin , and that from Greek () or (). The Greek words denoted a vase of alabaster.
The name may be derived further from an ancient Egyptian term referring to vessels of the Egyptian goddess Bast. She was represented as a lioness and frequently depicted as such in figures placed atop these alabaster vessels. Ancient Roman authors Pliny the Elder and Ptolemy wrote that the stone used for ointment jars called alabastra came from a region of Egypt known as Alabastron or Alabastrites.
Properties and usability
The purest alabaster is a snow-white material of fine uniform grain, but it often is associated with an oxide of iron, which produces brown clouding and veining in the stone. The coarser varieties of gypsum alabaster are converted by calcination into plaster of Paris, and are sometimes known as "plaster stone".
The softness of alabaster enables it to be carved readily into elaborate forms, but its solubility in water renders it unsuitable for outdoor work. If alabaster with a smooth, polished surface is washed with dishwashing liquid, it will become rough, dull and whiter, losing most of its translucency and lustre. The finer kinds of alabaster are employed largely as an ornamental stone, especially for ecclesiastical decoration and for the rails of staircases and halls.
Modern processing
Working techniques
Alabaster is mined and then sold in blocks to alabaster workshops. There they are cut to the needed size ("squaring"), and then are processed in different techniques: turned on a lathe for round shapes, carved into three-dimensional sculptures, chiselled to produce low relief figures or decoration; and then given an elaborate finish that reveals its transparency, colour, and texture.
Marble imitation
In order to diminish the translucency of the alabaster and to produce an opacity suggestive of true marble, the statues are immersed in a bath of water and heated gradually—nearly to the boiling point—an operation requiring great care, because if the temperature is not regulated carefully, the stone acquires a dead-white, chalky appearance. The effect of heating appears to be a partial dehydration of the gypsum. If properly treated, it very closely resembles true marble and is known as "marmo di Castellina".
Dyeing
Alabaster is a porous stone and can be "dyed" into any colour or shade, a technique used for centuries. For this the stone needs to be fully immersed in various pigmentary solutions and heated to a specific temperature. The technique can be used to disguise alabaster. In this way a very misleading imitation of coral that is called "alabaster coral" is produced.
Types, occurrence, history
Typically only one type is sculpted in any particular cultural environment, but sometimes both have been worked to make similar pieces in the same place and time. This was the case with small flasks of the alabastron type made in Cyprus from the Bronze Age into the Classical period.
Window panels
When cut in thin sheets, alabaster is translucent enough to be used for small windows. It was used for this purpose in Byzantine churches and later in medieval ones, especially in Italy. Large sheets of Aragonese gypsum alabaster are used extensively in the contemporary Cathedral of Our Lady of the Angels, which was dedicated in 2002 by the Archdiocese of Los Angeles, California. The cathedral incorporates special cooling to prevent the panes from overheating and turning opaque. The ancients used the calcite type, while the modern Los Angeles cathedral uses gypsum alabaster. There are also multiple examples of alabaster windows in ordinary village churches and monasteries in northern Spain.
Calcite alabaster
Calcite alabaster, harder than the gypsum variety, was the kind primarily used in ancient Egypt and the wider Middle East (but not for Assyrian palace reliefs), and is also used in modern times. It is found either as a stalagmitic deposit on the floors and walls of limestone caverns, or as a kind of travertine, similarly deposited in springs of calcareous water. Its deposition in successive layers gives rise to the banded appearance that the marble often shows in cross-section, from which its name is derived: onyx-marble or alabaster-onyx, or sometimes simply (and wrongly) onyx.
Egypt and the Middle East
Egyptian alabaster has been worked extensively near Suez and Assiut.
This stone variety is the "alabaster" of the ancient Egyptians and the Bible and is often termed Oriental alabaster, since the early examples came from the Far East. The Greek name alabastrites is said to be derived from the town of Alabastron in Egypt, where the stone was quarried. The locality probably owed its name to the mineral; the origin of the mineral name is obscure (though see above).
The "Oriental" alabaster was highly esteemed for making small perfume bottles or ointment vases called alabastra; the vessel name has been suggested as a possible source of the mineral name. In Egypt, craftsmen used alabaster for canopic jars and various other sacred and sepulchral objects. The sarcophagus of Seti I, found in his tomb near Thebes, is on display in Sir John Soane's Museum, London; it is carved in a single block of translucent calcite alabaster from Alabastron.
Algerian onyx-marble has been quarried largely in the province of Oran.
Calcite alabaster was quarried in ancient Israel in the cave known today as the Twins Cave near Beit Shemesh. Herod used this alabaster for baths in his palaces.
North America
In Mexico, there are famous deposits of a delicate green variety at La Pedrara, in the district of Tecali, near Puebla. Onyx-marble occurs also in the district of Tehuacán and at several localities in the US including California, Arizona, Utah, Colorado and Virginia.
Gypsum alabaster
Gypsum alabaster is the softer of the two varieties, the other being calcite alabaster. It was used primarily in medieval Europe, and is also used in modern times.
Ancient and Classical Near East
"Mosul marble" is a kind of gypsum alabaster found in the north of modern Iraq, which was used for the Assyrian palace reliefs of the 9th to 7th centuries BC; these are the largest type of alabaster sculptures to have been regularly made. The relief is very low and the carving detailed, but large rooms were lined with continuous compositions on slabs around high. The Lion Hunt of Ashurbanipal and military Lachish reliefs, both 7th century and in the British Museum, are some of the best known.
Gypsum alabaster was widely used for small sculpture for indoor use in the ancient world, especially in ancient Egypt and Mesopotamia. Fine detail could be obtained in a material with an attractive finish without iron or steel tools. Alabaster was used for vessels dedicated for use in the cult of the deity Bast in the culture of the ancient Egyptians, and thousands of gypsum alabaster artifacts dating to the late 4th millennium BC also have been found in Tell Brak (present day Nagar), in Syria.
In Mesopotamia, gypsum alabaster was the material of choice for figures of deities and devotees in temples, as in a figure believed to represent the deity Abu dating to the first half of the 3rd millennium BC and currently kept in New York.
Aragon, Spain
Much of the world's alabaster extraction is performed in the centre of the Ebro Valley in Aragon, Spain, which has the world's largest known exploitable deposits. According to a brochure published by the Aragon government, alabaster has elsewhere either been depleted, or its extraction is so difficult that it has almost been abandoned or is carried out at a very high cost. There are two separate sites in Aragon, both located in Tertiary basins. The most important site is the Fuentes-Azaila area, in the Tertiary Ebro Basin. The other is the Calatayud-Teruel Basin, which divides the Iberian Range in two main sectors (NW and SE).
The abundance of Aragonese alabaster was crucial for its use in architecture, sculpture and decoration. There is no record of likely use by pre-Roman cultures, so perhaps the first ones to use alabaster in Aragon were the Romans, who produced vessels from alabaster following the Greek and Egyptian models. It seems that since the reconstruction of the Roman Wall in Zaragoza in the 3rd century AD with alabaster, the use of this material became common in building for centuries. Muslim Saraqusta (today, Zaragoza) was also called "Medina Albaida", the White City, due to the appearance of its alabaster walls and palaces, which stood out among gardens, groves and orchards by the Ebro and Huerva Rivers.
The oldest remains in the Aljafería Palace, together with other interesting elements like capitals, reliefs and inscriptions, were made using alabaster, but it was during the artistic and economic blossoming of the Renaissance that Aragonese alabaster reached its golden age. In the 16th century sculptors in Aragon chose alabaster for their best works. They were adept at exploiting its lighting qualities and generally speaking the finished art pieces retained their natural color.
Volterra (Tuscany)
In Europe, the centre of the alabaster trade today is Florence, Italy. Tuscan alabaster occurs in nodular masses embedded in limestone, interstratified with marls of Miocene and Pliocene age. The mineral is worked largely by means of underground galleries, in the district of Volterra. Several varieties are recognized—veined, spotted, clouded, agatiform, and others. The finest kind, obtained principally from Castellina, is sent to Florence for figure-sculpture, while the common kinds are carved locally, into vases, lights, and various ornamental objects. These items are objects of extensive trade, especially in Florence, Pisa, and Livorno.
In the 3rd century BC the Etruscans used the alabaster of Tuscany from the area of modern-day Volterra to produce funeral urns, possibly taught by Greek artists. During the Middle Ages the craft of alabaster was almost completely forgotten. A revival started in the mid-16th century, and until the beginning of the 17th century alabaster work was strictly artistic and did not expand to form a large industry.
In the 17th and 18th centuries production of artistic, high-quality Renaissance-style artifacts stopped altogether, being replaced by less sophisticated, cheaper items better suited for large-scale production and commerce. The new industry prospered, but the reduced need for skilled craftsmen left only a few still working. The 19th century brought a boom to the industry, largely due to the "traveling artisans" who went and offered their wares to the palaces of Europe, as well as to America and the East.
In the 19th century new processing technology was also introduced, allowing for the production of custom-made, unique pieces, as well as the combination of alabaster with other materials. Apart from the newly developed craft, artistic work became again possible, chiefly by the Volterran sculptor Albino Funaioli. After a short slump, the industry was revived again by the sale of mass-produced mannerist Expressionist sculptures, and was further enhanced in the 1920s by a new branch creating ceiling and wall lamps in the Art Deco style, culminating in the participation at the 1925 International Exposition of Modern Industrial and Decorative Arts in Paris. Important names in the evolution of alabaster use after World War II are the Volterran Umberto Borgna, the "first alabaster designer", and later on the architect and industrial designer Angelo Mangiarotti.
England and Wales
Gypsum alabaster is a common mineral, which occurs in England in the Keuper marls of the Midlands, especially at Chellaston in Derbyshire, at Fauld in Staffordshire, and near Newark in Nottinghamshire. Deposits at all of these localities have been worked extensively.
In the 14th and 15th centuries its carving into small statues and sets of relief panels for altarpieces was a valuable local industry in Nottingham, as well as a major English export. These were usually painted, or partly painted. It was also used for the effigies, often life size, on tomb monuments, as the typical recumbent position suited the material's lack of strength, and it was cheaper and easier to work than good marble. After the English Reformation the making of altarpiece sets was discontinued, but funerary monument work in reliefs and statues continued.
Besides examples of these carvings still in Britain (especially at the Nottingham Castle Museum, British Museum, and Victoria and Albert Museum), trade in mineral alabaster (rather than just the antiques trade) has scattered examples in the material that may be found as far afield as the Musée de Cluny, Spain, and Scandinavia.
Alabaster also is found, although in smaller quantity, at Watchet in Somerset, near Penarth in Glamorganshire, and elsewhere. In Cumbria it occurs largely in the New Red rocks, but at a lower geological horizon. The alabaster of Nottinghamshire and Derbyshire is found in thick nodular beds or "floors" in spheroidal masses known as "balls" or "bowls" and in smaller lenticular masses termed "cakes". At Chellaston, where the local alabaster is known as "Patrick", it has been worked into ornaments under the name of "Derbyshire spar"―a term more properly applied to fluorspar.
Black alabaster
Black alabaster is a rare anhydrite form of the gypsum-based mineral. This black form is found in only three veins in the world, one each in the United States, Italy, and China.
Alabaster Caverns State Park, near Freedom, Oklahoma, is home to a natural gypsum cave in which much of the gypsum is in the form of alabaster. There are several types of alabaster found at the site, including pink, white, and the rare black alabaster.
Gallery
Ancient and Classical Near East
European Middle Ages
Modern
See also
Mineralogy
Calcite – mineral consisting of calcium carbonate (CaCO3); archaeologists and stone trade professionals, unlike mineralogists, call one variety of calcite "alabaster"
Gypsum – mineral composed of calcium sulfate dihydrate (CaSO4·2H2O); alabaster is one of its varieties
Anhydrite – a mineral closely related to gypsum
Calcium sulfate – the main inorganic compound (CaSO4) of gypsum
– translucent sheets of marble or alabaster used during the Early Middle Ages for windows instead of glass
Window and roof panels
Chronological list of examples:
– 5th century, Ravenna
– 6th century, Ravenna
– mainly 13th–14th century, Valencia, Spain; the lantern of the octagonal crossing tower
– 14th-century, Orvieto, Umbria, central Italy
– 17th century, Rome; alabaster window by Bernini (1598–1680) used to create a "spotlight"
– 1924, Jerusalem, architect: Antonio Barluzzi. Windows fitted with dyed alabaster panels.
– 1924, Mount Tabor, architect: Antonio Barluzzi. Alabaster roofing was attempted.
References
Further reading
Harrell J.A. (1990), "Misuse of the term 'alabaster' in Egyptology", Göttinger Miszellen, 119, pp. 37–42.
Mackintosh-Smith T. (1999), "Moonglow from Underground". Aramco World May–June 1999.
External links
More about alabaster and travertine, brief guide explaining the confusing, different use of the same terms by geologists, archaeologists and the stone trade. Oxford University Museum of Natural History, 2012
Alabaster Craftmanship in Volterra
Calcium minerals
Carbonate minerals
Sulfate minerals
Minerals
Stone (material)
Sculpture materials
Bastet
|
https://en.wikipedia.org/wiki/Amine
|
In chemistry, amines are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Amines are formally derivatives of ammonia (NH3), wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine (NH2Cl).
The substituent is called an amino group.
Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R–C(=O)–NR′2, are called amides and have different chemical properties from amines.
Classification of amines
Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring.
Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen (how many hydrogen atoms of the ammonia molecule are replaced by hydrocarbon groups):
Primary (1°) amines—Primary amines arise when one of three hydrogen atoms in ammonia is replaced by an alkyl or aromatic group. Important primary alkyl amines include methylamine, most amino acids, and the buffering agent tris, while primary aromatic amines include aniline.
Secondary (2°) amines—Secondary amines have two organic substituents (alkyl, aryl or both) bound to the nitrogen together with one hydrogen. Important representatives include dimethylamine, while an example of an aromatic amine would be diphenylamine.
Tertiary (3°) amines—In tertiary amines, nitrogen has three organic substituents. Examples include trimethylamine, which has a distinctively fishy smell, and EDTA.
A fourth subcategory is determined by the connectivity of the substituents attached to the nitrogen:
Cyclic amines—Cyclic amines are either secondary or tertiary amines. Examples of cyclic amines include the 3-membered ring aziridine and the six-membered ring piperidine. N-methylpiperidine and N-phenylpiperidine are examples of cyclic tertiary amines.
It is also possible to have four organic substituents on the nitrogen. These species are not amines but are quaternary ammonium cations and have a charged nitrogen center. Quaternary ammonium salts exist with many kinds of anions.
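The classification above amounts to counting the organic substituents on the nitrogen center; a minimal sketch (the function name and mapping are illustrative):

```python
def classify_nitrogen_center(organic_substituents: int) -> str:
    """Classify a nitrogen center by how many of ammonia's three hydrogens
    are replaced by organic (alkyl or aryl) groups. Four substituents give
    a positively charged quaternary ammonium cation, which is not an amine.
    """
    names = {
        0: "ammonia",
        1: "primary amine",      # e.g. methylamine
        2: "secondary amine",    # e.g. dimethylamine
        3: "tertiary amine",     # e.g. trimethylamine
        4: "quaternary ammonium cation",
    }
    return names[organic_substituents]
```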
Naming conventions
Amines are named in several ways. Typically, the compound is given the prefix "amino-" or the suffix "-amine". The prefix "N-" shows substitution on the nitrogen atom. An organic compound with multiple amino groups is called a diamine, triamine, tetraamine and so forth.
Systematic names for some common amines:
Physical properties
Hydrogen bonding significantly influences the properties of primary and secondary amines. For example, methyl and ethyl amines are gases under standard conditions, whereas the corresponding methyl and ethyl alcohols are liquids. Amines possess a characteristic ammonia smell; liquid amines have a distinctive "fishy" and foul smell.
The nitrogen atom features a lone electron pair that can bind H+ to form an ammonium ion R3NH+. The lone electron pair is represented in this article by two dots above or next to the N. The water solubility of simple amines is enhanced by hydrogen bonding involving these lone electron pairs. Typically salts of ammonium compounds exhibit the following order of solubility in water: primary ammonium (RNH3+) > secondary ammonium (R2NH2+) > tertiary ammonium (R3NH+). Small aliphatic amines display significant solubility in many solvents, whereas those with large substituents are lipophilic. Aromatic amines, such as aniline, have their lone pair electrons conjugated into the benzene ring, thus their tendency to engage in hydrogen bonding is diminished. Their boiling points are high and their solubility in water is low.
Spectroscopic identification
Typically the presence of an amine functional group is deduced by a combination of techniques, including mass spectrometry as well as NMR and IR spectroscopies. 1H NMR signals for amines disappear upon treatment of the sample with D2O. In their infrared spectrum primary amines exhibit two N-H bands, whereas secondary amines exhibit only one.
Structure
Alkyl amines
Alkyl amines characteristically feature tetrahedral nitrogen centers. C-N-C and C-N-H angles approach the idealized angle of 109°. C-N distances are slightly shorter than C-C distances. The energy barrier for the nitrogen inversion of the stereocenter is about 7 kcal/mol for a trialkylamine. The interconversion has been compared to the inversion of an open umbrella into a strong wind.
Amines of the type NHRR' and NRR′R″ are chiral: the nitrogen center bears four substituents counting the lone pair. Because of the low barrier to inversion, amines of the type NHRR' cannot be obtained in optical purity. For chiral tertiary amines, NRR′R″ can only be resolved when the R, R', and R″ groups are constrained in cyclic structures such as N-substituted aziridines (quaternary ammonium salts are resolvable).
Aromatic amines
In aromatic amines ("anilines"), nitrogen is often nearly planar owing to conjugation of the lone pair with the aryl substituent. The C-N distance is correspondingly shorter. In aniline, the C-N distance is the same as the C-C distances.
Basicity
Like ammonia, amines are bases. Compared to alkali metal hydroxides, amines are weaker (see table for examples of conjugate acid Ka values).
The basicity of amines depends on:
The electronic properties of the substituents (alkyl groups enhance the basicity, aryl groups diminish it).
The degree of solvation of the protonated amine, which includes steric hindrance by the groups on nitrogen.
Electronic effects
Owing to inductive effects, the basicity of an amine might be expected to increase with the number of alkyl groups on the amine. Correlations are complicated owing to the effects of solvation which are opposite the trends for inductive effects. Solvation effects also dominate the basicity of aromatic amines (anilines). For anilines, the lone pair of electrons on nitrogen delocalizes into the ring, resulting in decreased basicity. Substituents on the aromatic ring, and their positions relative to the amino group, also affect basicity as seen in the table.
Solvation effects
Solvation significantly affects the basicity of amines. N-H groups strongly interact with water, especially in ammonium ions. Consequently, the basicity of ammonia is enhanced by a factor of 10¹¹ by solvation. The intrinsic basicity of amines, i.e. the situation where solvation is unimportant, has been evaluated in the gas phase. In the gas phase, amines exhibit the basicities predicted from the electron-releasing effects of the organic substituents. Thus tertiary amines are more basic than secondary amines, which are more basic than primary amines, and finally ammonia is least basic. The order of basicities in water (pKb values) does not follow this trend. Similarly, aniline is more basic than ammonia in the gas phase, but ten thousand times less so in aqueous solution.
In aprotic polar solvents such as DMSO, DMF, and acetonitrile the energy of solvation is not as high as in protic polar solvents like water and methanol. For this reason, the basicity of amines in these aprotic solvents is almost solely governed by the electronic effects.
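The reversal between gas-phase and aqueous basicity can be seen by ranking amines by the pKa of their conjugate acids. A sketch using rounded literature figures (treat the numbers as approximate):

```python
# Approximate aqueous pKa values of the conjugate acids (higher pKa = stronger base).
pka_aq = {
    "ammonia": 9.25,
    "methylamine": 10.66,    # primary
    "dimethylamine": 10.73,  # secondary
    "trimethylamine": 9.80,  # tertiary: weaker in water than inductive effects predict
    "aniline": 4.87,         # aromatic: lone pair delocalized into the ring
}

by_aqueous_basicity = sorted(pka_aq, key=pka_aq.get, reverse=True)
print(by_aqueous_basicity)
# ['dimethylamine', 'methylamine', 'trimethylamine', 'ammonia', 'aniline']
# The gas-phase order would instead be trimethylamine > dimethylamine
# > methylamine > ammonia, following the electron-releasing effects alone.
```

The aqueous ranking illustrates the text's point: trimethylamine falls out of the position that inductive effects alone would give it, because its protonated form is poorly solvated.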
Synthesis
From alcohols
Industrially significant alkyl amines are prepared from ammonia by alkylation with alcohols:
ROH + NH3 -> RNH2 + H2O
From alkyl and aryl halides
Unlike the reaction of amines with alcohols the reaction of amines and ammonia with alkyl halides is used for synthesis in the laboratory:
RX + 2 R'NH2 -> RR'NH + [RR'NH2]X
In such reactions, which are more useful for alkyl iodides and bromides, the degree of alkylation is difficult to control such that one obtains mixtures of primary, secondary, and tertiary amines, as well as quaternary ammonium salts.
Selectivity can be improved via the Delépine reaction, although this is rarely employed on an industrial scale. Selectivity is also assured in the Gabriel synthesis, which involves an organohalide reacting with potassium phthalimide.
Aryl halides are much less reactive toward amines and for that reason are more controllable. A popular way to prepare aryl amines is the Buchwald-Hartwig reaction.
From alkenes
Disubstituted alkenes react with HCN in the presence of strong acids to give formamides, which can be decarbonylated. This method, the Ritter reaction, is used industrially to produce tertiary amines such as tert-octylamine.
Hydroamination of alkenes is also widely practiced. The reaction is catalyzed by zeolite-based solid acids.
Reductive routes
Via the process of hydrogenation, unsaturated N-containing functional groups are reduced to amines using hydrogen in the presence of a nickel catalyst. Suitable groups include nitriles, azides, imines including oximes, amides, and nitro groups. In the case of nitriles, reactions are sensitive to acidic or alkaline conditions, which can cause hydrolysis of the group. Lithium aluminium hydride is more commonly employed for the reduction of these same groups on the laboratory scale.
Many amines are produced from aldehydes and ketones via reductive amination, which can either proceed catalytically or stoichiometrically.
Aniline () and its derivatives are prepared by reduction of the nitroaromatics. In industry, hydrogen is the preferred reductant, whereas, in the laboratory, tin and iron are often employed.
Specialized methods
Many methods exist for the preparation of amines, many of these methods being rather specialized.
Reactions
Alkylation, acylation, and sulfonation, etc.
Aside from their basicity, the dominant reactivity of amines is their nucleophilicity. Most primary amines are good ligands for metal ions to give coordination complexes. Amines are alkylated by alkyl halides. Acyl chlorides and acid anhydrides react with primary and secondary amines to form amides (the "Schotten–Baumann reaction").
Similarly, with sulfonyl chlorides, one obtains sulfonamides. This transformation, known as the Hinsberg reaction, is a chemical test for the presence of amines.
Because amines are basic, they neutralize acids to form the corresponding ammonium salts . When formed from carboxylic acids and primary and secondary amines, these salts thermally dehydrate to form the corresponding amides.
Amines undergo sulfamation upon treatment with sulfur trioxide or sources thereof:
R2NH + SO3 -> R2NSO3H
Acid-base reactions
Alkyl amines protonate near pH 7 to give alkylammonium derivatives.
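The extent of protonation at a given pH follows from the Henderson–Hasselbalch relation; a minimal sketch (the methylamine pKa is an assumed literature value):

```python
def fraction_protonated(pH: float, pKa_conjugate_acid: float) -> float:
    """Fraction of an amine present as its ammonium form (R3NH+) at a given pH:
    [BH+] / ([B] + [BH+]) = 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (pH - pKa_conjugate_acid))

# Methylamine (conjugate-acid pKa ~10.66, assumed) at physiological pH 7.4
# comes out almost fully protonated:
f = fraction_protonated(7.4, 10.66)
```

At pH equal to the pKa the function returns exactly 0.5, the halfway point between the free amine and its ammonium salt.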
Diazotization
Amines react with nitrous acid to give diazonium salts. The alkyl diazonium salts are of little importance because they are too unstable. The most important members are derivatives of aromatic amines such as aniline ("phenylamine") (A = aryl or naphthyl):
ANH2 + HNO2 + HX -> AN2+ + X- + 2 H2O
Anilines and naphthylamines form more stable diazonium salts, which can be isolated in crystalline form. Diazonium salts undergo a variety of useful transformations involving replacement of the N2 group with anions. For example, cuprous cyanide gives the corresponding nitriles:
AN2+ + Y- -> AY + N2
Aryldiazonium salts couple with electron-rich aromatic compounds such as phenols to form azo compounds. Such reactions are widely applied to the production of dyes.
Conversion to imines
Imine formation is an important reaction. Primary amines react with ketones and aldehydes to form imines. In the case of formaldehyde (R′ = H), these products typically exist as cyclic trimers.
RNH2 + R'_2C=O -> R'_2C=NR + H2O
Reduction of these imines gives secondary amines:
R'_2C=NR + H2 -> R'_2CH-NHR
Similarly, secondary amines react with ketones and aldehydes to form enamines:
R2NH + R'(R''CH2)C=O -> R''CH=C(NR2)R' + H2O
Overview
An overview of the reactions of amines is given below:
Biological activity
Amines are ubiquitous in biology. The breakdown of amino acids releases amines, famously in the case of decaying fish which smell of trimethylamine. Many neurotransmitters are amines, including epinephrine, norepinephrine, dopamine, serotonin, and histamine. Protonated amino groups () are the most common positively charged moieties in proteins, specifically in the amino acid lysine. The anionic polymer DNA is typically bound to various amine-rich proteins. Additionally, the terminal charged primary ammonium on lysine forms salt bridges with carboxylate groups of other amino acids in polypeptides, which is one of the primary influences on the three-dimensional structures of proteins.
Amine hormones
Hormones derived from the modification of amino acids are referred to as amine hormones. Typically, the original structure of the amino acid is modified such that the –COOH (carboxyl) group is removed, whereas the –NH3+ (amine) group remains. Amine hormones are synthesized from the amino acids tryptophan or tyrosine.
Application of amines
Dyes
Primary aromatic amines are used as a starting material for the manufacture of azo dyes. They react with nitrous acid to form a diazonium salt, which can undergo a coupling reaction to form an azo compound. As azo compounds are highly coloured, they are widely used in the dyeing industry, for example:
Methyl orange
Direct brown 138
Sunset yellow FCF
Ponceau
Drugs
Most drugs and drug candidates contain amine functional groups:
Chlorpheniramine is an antihistamine that helps to relieve allergic disorders due to cold, hay fever, itchy skin, insect bites and stings.
Chlorpromazine is a tranquilizer that sedates without inducing sleep. It is used to relieve anxiety, excitement, restlessness or even mental disorder.
Ephedrine and phenylephrine, as amine hydrochlorides, are used as decongestants.
Amphetamine, methamphetamine, and methcathinone are psychostimulant amines that are listed as controlled substances by the US DEA.
Thioridazine, an antipsychotic drug, is an amine which is believed to exhibit its antipsychotic effects, in part, due to its effects on other amines.
Amitriptyline, imipramine, lofepramine and clomipramine are tricyclic antidepressants and tertiary amines.
Nortriptyline, desipramine, and amoxapine are tricyclic antidepressants and secondary amines. (The tricyclics are grouped by the nature of the final amino group on the side chain.)
Substituted tryptamines and phenethylamines are key basic structures for a large variety of psychedelic drugs.
Opiate analgesics such as morphine, codeine, and heroin are tertiary amines.
Gas treatment
Aqueous monoethanolamine (MEA), diglycolamine (DGA), diethanolamine (DEA), diisopropanolamine (DIPA) and methyldiethanolamine (MDEA) are widely used industrially for removing carbon dioxide (CO2) and hydrogen sulfide (H2S) from natural gas and refinery process streams. They may also be used to remove CO2 from combustion gases and flue gases and may have potential for abatement of greenhouse gases. Related processes are known as sweetening.
Epoxy resin curing agents
Amines are often used as epoxy resin curing agents. These include dimethylethylamine, cyclohexylamine, and a variety of diamines such as 4,4′-diaminodicyclohexylmethane. Multifunctional amines such as tetraethylenepentamine and triethylenetetramine are also widely used in this capacity. The reaction proceeds by the lone pair of electrons on the amine nitrogen attacking the outermost carbon on the oxirane ring of the epoxy resin. This relieves ring strain on the epoxide and is the driving force of the reaction.
Safety
Low-molecular-weight simple amines, such as ethylamine, are only weakly toxic, with lethal doses between 100 and 1000 mg/kg. They are skin irritants, especially as some are easily absorbed through the skin. Amines are a broad class of compounds, and more complex members of the class can be extremely bioactive, for example strychnine.
See also
Acid-base extraction
Amine value
Amine gas treating
Ammine
Biogenic amine
Ligand isomerism
Official naming rules for amines as determined by the International Union of Pure and Applied Chemistry (IUPAC)
|
https://en.wikipedia.org/wiki/Aloe
|
Aloe (also written Aloë) is a genus containing over 650 species of flowering succulent plants. The most widely known species is Aloe vera, or "true aloe", so called because it is cultivated as the standard source for assorted pharmaceutical purposes. Other species, such as Aloe ferox, are also cultivated or harvested from the wild for similar applications.
The APG IV system (2016) places the genus in the family Asphodelaceae, subfamily Asphodeloideae. Within the subfamily it may be placed in the tribe Aloeae. In the past, it has been assigned to the family Aloaceae (now included in the Asphodeloideae) or to a broadly circumscribed family Liliaceae (the lily family). The plant Agave americana, which is sometimes called "American aloe", belongs to the Asparagaceae, a different family.
The genus is native to tropical and southern Africa, Madagascar, Jordan, the Arabian Peninsula, and various islands in the Indian Ocean (Mauritius, Réunion, Comoros, etc.). A few species have also become naturalized in other regions (Mediterranean, India, Australia, North and South America, Hawaiian Islands, etc.).
Etymology
The genus name Aloe is derived from the Arabic word alloeh, meaning "bitter and shiny substance" or from Hebrew ahalim, plural of ahal.
Description
Most Aloe species have a rosette of large, thick, fleshy leaves. Aloe flowers are tubular, frequently yellow, orange, pink, or red, and are borne, densely clustered and pendant, at the apex of simple or branched, leafless stems. Many species of Aloe appear to be stemless, with the rosette growing directly at ground level; other varieties may have a branched or unbranched stem from which the fleshy leaves spring. They vary in color from grey to bright-green and are sometimes striped or mottled. Some aloes native to South Africa are tree-like (arborescent).
Systematics
The APG IV system (2016) places the genus in the family Asphodelaceae, subfamily Asphodeloideae. In the past it has also been assigned to the families Liliaceae and Aloeaceae, as well as the family Asphodelaceae sensu stricto, before this was merged into the Asphodelaceae sensu lato.
The circumscription of the genus has varied widely. Many genera, such as Lomatophyllum, have been brought into synonymy. Species at one time placed in Aloe, such as Agave americana, have been moved to other genera. Molecular phylogenetic studies, particularly from 2010 onwards, suggested that as then circumscribed, Aloe was not monophyletic and should be divided into more tightly defined genera. In 2014, John Charles Manning and coworkers produced a phylogeny in which Aloe was divided into six genera: Aloidendron, Kumara, Aloiampelos, Aloe, Aristaloe and Gonialoe.
Species
Over 600 species are accepted in the genus Aloe, plus even more synonyms and unresolved species, subspecies, varieties, and hybrids. Some of the accepted species are:
Aloe aculeata Pole-Evans
Aloe africana Mill.
Aloe albida (Stapf) Reynolds
Aloe albiflora Guillaumin
Aloe arborescens Mill.
Aloe arenicola Reynolds
Aloe argenticauda Merxm. & Giess
Aloe bakeri Scott-Elliot
Aloe ballii Reynolds
Aloe ballyi Reynolds
Aloe brevifolia Mill.
Aloe broomii Schönland
Aloe buettneri A.Berger
Aloe camperi Schweinf.
Aloe capitata Baker
Aloe comosa Marloth & A.Berger
Aloe cooperi Baker
Aloe corallina Verd.
Aloe dewinteri Giess ex Borman & Hardy
Aloe erinacea D.S.Hardy
Aloe excelsa A.Berger
Aloe ferox Mill.
Aloe forbesii Balf.f.
Aloe helenae Danguy
Aloe hereroensis Engl.
Aloe inermis Forssk.
Aloe inyangensis Christian
Aloe jawiyon S.J.Christie, D.P.Hannon & Oakman ex A.G.Mill.
Aloe jucunda Reynolds
Aloe khamiesensis Pillans
Aloe kilifiensis Christian
Aloe maculata All.
Aloe marlothii A.Berger
Aloe mubendiensis Christian
Aloe namibensis Giess
Aloe nyeriensis Christian & I.Verd.
Aloe pearsonii Schönland
Aloe peglerae Schönland
Aloe perfoliata L.
Aloe perryi Baker
Aloe petricola Pole-Evans
Aloe polyphylla Pillans
Aloe rauhii Reynolds
Aloe reynoldsii Letty
Aloe scobinifolia Reynolds & Bally
Aloe sinkatana Reynolds
Aloe squarrosa Baker ex Balf.f.
Aloe striata Haw.
Aloe succotrina Lam.
Aloe suzannae Decary
Aloe thraskii Baker
Aloe vera (L.) Burm.f.
Aloe viridiflora Reynolds
Aloe wildii (Reynolds) Reynolds
In addition to the species and hybrids between species within the genus, several hybrids with other genera have been created in cultivation, such as between Aloe and Gasteria (× Gasteraloe), and between Aloe and Astroloba (×Aloloba).
Uses
Aloe species are frequently cultivated as ornamental plants, both in gardens and in pots. Many aloe species are highly decorative and are valued by collectors of succulents. Aloe vera is used both internally and externally on humans as folk or alternative medicine, and the genus is known for its medicinal and cosmetic properties. Around 75% of Aloe species are used locally for medicinal purposes. The plants can also be made into special soaps or used in other skin care products (see natural skin care).
Numerous cultivars with mixed or uncertain parentage are grown. Of these, Aloe ‘Lizard Lips’ has gained the Royal Horticultural Society’s Award of Garden Merit.
Aloe variegata has been planted on graves in the superstitious belief that this ensures eternal life.
Historical uses
Historical use of various aloe species is well documented. Documentation of the clinical effectiveness is available, although relatively limited.
Of the 500+ species, only a few were used traditionally as herbal medicines, Aloe vera again being the most commonly used species. Also included are A. perryi and A. ferox. The Ancient Greeks and Romans used Aloe vera to treat wounds. In the Middle Ages, the yellowish liquid found inside the leaves was favored as a purgative. Unprocessed aloe that contains aloin is generally used as a laxative, whereas processed juice does not usually contain significant aloin.
Some species, particularly Aloe vera, are used in alternative medicine and first aid. Both the translucent inner pulp and the resinous yellow aloin from wounding the aloe plant are used externally for skin discomforts. As an herbal medicine, Aloe vera juice is commonly used internally for digestive discomfort.
According to Cancer Research UK, a potentially deadly product called T-UP is made of concentrated aloe, and promoted as a cancer cure. They say "there is currently no evidence that aloe products can help to prevent or treat cancer in humans".
Aloin in OTC laxative products
On May 9, 2002, the US Food and Drug Administration issued a final rule banning the use of aloin, the yellow sap of the aloe plant, for use as a laxative ingredient in over-the-counter drug products. Most aloe juices today do not contain significant aloin.
Chemical properties
According to W. A. Shenstone, two classes of aloins are recognized: (1) nataloins, which yield picric and oxalic acids with nitric acid and do not give a red coloration with the acid; and (2) barbaloins, which yield aloetic acid (C7H2N3O5), chrysammic acid (C7H2N2O6), and picric and oxalic acids with nitric acid, and are reddened by the acid. The second group may be divided into a-barbaloins, obtained from Barbados aloe and reddened in the cold, and b-barbaloins, obtained from Socotrine and Zanzibar aloes, reddened by ordinary nitric acid only when warmed, or by fuming acid in the cold. Nataloin (2C17H13O7·H2O) forms bright-yellow scales; barbaloin (C17H18O7) forms prismatic crystals. Aloe preparations are also used to dilute essential oils, as a safety measure, before they are applied to the skin.
Flavoring
Aloe perryi, A. barbadensis, A. ferox, and hybrids of this species with A. africana and A. spicata are listed as natural flavoring substances in the US government Electronic Code of Federal Regulations. Aloe socotrina is said to be used in yellow Chartreuse.
Heraldic occurrence
Aloe rubrolutea occurs as a charge in heraldry, for example in the Civic Heraldry of Namibia.
See also
List of Aloe species
List of ineffective cancer treatments
List of Southern African indigenous trees
|
https://en.wikipedia.org/wiki/Ambergris
|
Ambergris, also spelled ambergrease, or grey amber, is a solid, waxy, flammable substance of a dull grey or blackish colour produced in the digestive system of sperm whales. Freshly produced ambergris has a marine, fecal odor. As it ages, it acquires a sweet, earthy scent commonly likened to the fragrance of isopropyl alcohol without the vaporous chemical astringency.
Ambergris has been highly valued by perfume makers as a fixative that allows the scent to last much longer, although it has been mostly replaced by synthetic ambroxide. Dogs are attracted to the smell of ambergris and are sometimes used by ambergris searchers.
Etymology
The English word amber derives from an Arabic word (ultimately from Middle Persian ambar, which also meant ambergris), via Middle Latin ambar and Middle French ambre. The word "amber", in its sense of "ambergris", was adopted into Middle English in the 14th century.
The word "ambergris" comes from the Old French "ambre gris" or "grey amber". The addition of "grey" came about when, in the Romance languages, the sense of the word "amber" was extended to Baltic amber (fossil resin), as white or yellow amber (ambre jaune), from as early as the late 13th century. This fossilized resin became the dominant (and now exclusive) sense of "amber", leaving "ambergris" as the word for the whale secretion.
The archaic alternate spelling "ambergrease" arose as an eggcorn from the phonetic pronunciation of "ambergris," encouraged by the substance's waxy texture.
Formation
Ambergris is formed from a secretion of the bile duct in the intestines of the sperm whale, and can be found floating on the sea or washed up on coastlines. It is sometimes found in the abdomens of dead sperm whales. Because the beaks of giant squids have been discovered within lumps of ambergris, scientists have theorized that the substance is produced by the whale's gastrointestinal tract to ease the passage of hard, sharp objects that it may have eaten.
Ambergris is passed like fecal matter. It is speculated that an ambergris mass too large to be passed through the intestines is expelled via the mouth, but this remains under debate. Another theory states that an ambergris mass is formed when the colon of a whale is enlarged by a blockage from intestinal worms and cephalopod parts resulting in the death of the whale and the mass being excreted into the sea. Ambergris takes years to form. Christopher Kemp, the author of Floating Gold: A Natural (and Unnatural) History of Ambergris, says that it is only produced by sperm whales, and only by an estimated one percent of them. Ambergris is rare; once expelled by a whale, it often floats for years before making landfall. The slim chances of finding ambergris and the legal ambiguity involved led perfume makers away from ambergris, and led chemists on a quest to find viable alternatives.
Ambergris is found primarily in the Atlantic Ocean and on the coasts of South Africa; Brazil; Madagascar; the East Indies; The Maldives; China; Japan; India; Australia; New Zealand; and the Molucca Islands. Most commercially collected ambergris comes from The Bahamas in the Atlantic, particularly New Providence. In 2021, fishermen found a 127 kg (280-pound) piece of ambergris off the coast of Yemen, valued at US$1.5 million. Fossilised ambergris from 1.75 million years ago has also been found.
Physical properties
Ambergris is found in lumps of various shapes and sizes. When initially expelled by or removed from the whale, the fatty precursor of ambergris is pale white in color (sometimes streaked with black) and soft, with a strong fecal smell. Following months to years of photodegradation and oxidation in the ocean, this precursor gradually hardens, developing a dark grey or black color, a crusty and waxy texture, and a peculiar odor that is at once sweet, earthy, marine, and animalic. Its scent has been generally described as a vastly richer and smoother version of isopropanol without its stinging harshness. In this developed condition, ambergris has a specific gravity ranging from 0.780 to 0.926, meaning it floats in water. It melts to a fatty, yellow resinous liquid and, at higher temperatures, is volatilised into a white vapor. It is soluble in ether, and in volatile and fixed oils.
Chemical properties
Ambergris is relatively nonreactive to acid. White crystals of a terpenoid known as ambrein, discovered by Ružička and Fernand Lardon in 1946, can be separated from ambergris by heating raw ambergris in alcohol, then allowing the resulting solution to cool. Breakdown of the relatively scentless ambrein through oxidation produces ambroxide and ambrinol, the main odor components of ambergris.
Ambroxide is now produced synthetically and used extensively in the perfume industry.
Applications
Ambergris has been known mostly for its use in creating perfume and fragrance, much like musk. Perfumes containing ambergris can still be found.
Ambergris has historically been used in food and drink. A serving of eggs and ambergris was reportedly King Charles II of England's favorite dish. A mid-19th-century recipe for Rum Shrub liqueur in The English and Australian Cookery Book called for a thread of ambergris to be added to rum, almonds, cloves, cassia, and the peel of oranges. It has been used as a flavoring agent in Turkish coffee and, in 18th-century Europe, in hot chocolate. The substance is considered an aphrodisiac in some cultures.
Ancient Egyptians burned ambergris as incense, while in modern Egypt ambergris is used for scenting cigarettes. The ancient Chinese called the substance "dragon's spittle fragrance". During the Black Death in Europe, people believed that carrying a ball of ambergris could help prevent them from contracting plague. This was because the fragrance covered the smell of the air which was believed to be a cause of plague.
During the Middle Ages, Europeans used ambergris as a medication for headaches, colds, epilepsy, and other ailments.
Legality
From the 18th to the mid-19th century, the whaling industry prospered. By some reports, nearly 50,000 whales, including sperm whales, were killed each year. Throughout the 1800s, millions of whales were killed for their oil, whalebone, and ambergris, and several whale species became endangered as a result. After studies showed that whale populations were being threatened, the International Whaling Commission instituted a moratorium on commercial whaling in 1982. Although ambergris is not harvested from whales, many countries also ban the trade of ambergris as part of the more general ban on the hunting and exploitation of whales.
Urine, faeces, and ambergris (that has been naturally excreted by a sperm whale) are waste products not considered parts or derivatives of a CITES species and are therefore not covered by the provisions of the convention.
Illegal
Australia – Under federal law, the export and import of ambergris for commercial purposes is banned by the Environment Protection and Biodiversity Conservation Act 1999. The various states and territories have additional laws regarding ambergris.
United States – The possession and trade of ambergris is prohibited by the Endangered Species Act of 1973.
India – Sale or possession is illegal under the Wild Life (Protection) Act, 1972.
Legal
United Kingdom
France
Switzerland
Maldives
|
https://en.wikipedia.org/wiki/Arthritis
|
Arthritis is a term often used to mean any disorder that affects joints. Symptoms generally include joint pain and stiffness. Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints. In some types of arthritis, other organs are also affected. Onset can be gradual or sudden.
There are over 100 types of arthritis. The most common forms are osteoarthritis (degenerative joint disease) and rheumatoid arthritis. Osteoarthritis usually occurs with age and affects the fingers, knees, and hips. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet. Other types include gout, lupus, fibromyalgia, and septic arthritis. They are all types of rheumatic disease.
Treatment may include resting the joint and alternating between applying ice and heat. Weight loss and exercise may also be useful. Recommended medications may depend on the form of arthritis. These may include pain medications such as ibuprofen and paracetamol (acetaminophen). In some circumstances, a joint replacement may be useful.
Osteoarthritis affects more than 3.8% of people, while rheumatoid arthritis affects about 0.24% of people. Gout affects about 1–2% of the Western population at some point in their lives. In Australia about 15% of people are affected by arthritis, while in the United States more than 20% have a type of arthritis. Overall the disease becomes more common with age. Arthritis is a common reason that people miss work and can result in a decreased quality of life. The term is derived from arthr- (meaning 'joint') and -itis (meaning 'inflammation').
Classification
There are several diseases where joint pain is primary, and is considered the main feature. Generally when a person has "arthritis" it means that they have one of these diseases, which include:
Hemarthrosis
Osteoarthritis
Rheumatoid arthritis
Gout and pseudo-gout
Septic arthritis
Ankylosing spondylitis
Juvenile idiopathic arthritis
Still's disease
Psoriatic arthritis
Joint pain can also be a symptom of other diseases. In this case, the arthritis is considered to be secondary to the main disease; these include:
Psoriasis
Reactive arthritis
Ehlers–Danlos syndrome
Iron overload
Hepatitis
Lyme disease
Sjögren's disease
Hashimoto's thyroiditis
Celiac disease
Non-celiac gluten sensitivity
Inflammatory bowel disease (including Crohn's disease and ulcerative colitis)
Henoch–Schönlein purpura
Hyperimmunoglobulinemia D with recurrent fever
Sarcoidosis
Whipple's disease
TNF receptor associated periodic syndrome
Granulomatosis with polyangiitis (and many other vasculitis syndromes)
Familial Mediterranean fever
Systemic lupus erythematosus
An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease.
Signs and symptoms
Pain, which can vary in severity, is a common symptom in virtually all types of arthritis. Other symptoms include swelling, joint stiffness, redness, and aching around the joint(s). Arthritic disorders like lupus and rheumatoid arthritis can affect other organs in the body, leading to a variety of symptoms. Symptoms may include:
Inability to use the hand or walk
Stiffness in one or more joints
Rash or itch
Malaise and fatigue
Weight loss
Poor sleep
Muscle aches and pains
Tenderness
Difficulty moving the joint
It is common in advanced arthritis for significant secondary changes to occur. For example, arthritic symptoms might make it difficult for a person to move around and/or exercise, which can lead to secondary effects, such as:
Muscle weakness
Loss of flexibility
Decreased aerobic fitness
These changes, in addition to the primary symptoms, can have a huge impact on quality of life.
Disability
Arthritis is the most common cause of disability in the United States. More than 20 million individuals with arthritis have severe limitations in function on a daily basis. Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it difficult for individuals to be physically active and some become home bound.
It is estimated that the total cost of arthritis cases is close to $100 billion of which almost 50% is from lost earnings. Each year, arthritis results in nearly 1 million hospitalizations and close to 45 million outpatient visits to health care centers.
Decreased mobility, in combination with the above symptoms, can make it difficult for an individual to remain physically active, contributing to an increased risk of obesity, high cholesterol or vulnerability to heart disease. People with arthritis are also at increased risk of depression, which may be a response to numerous factors, including fear of worsening symptoms.
Risk factors
There are common risk factors that increase a person's chance of developing arthritis later in adulthood. Some of these are modifiable while others are not. Smoking has been linked to an increased susceptibility of developing arthritis, particularly rheumatoid arthritis.
Diagnosis
Diagnosis is made by clinical examination from an appropriate health professional, and may be supported by other tests such as radiology and blood tests, depending on the type of suspected arthritis. All arthritides potentially feature pain. Pain patterns may differ depending on the arthritides and the location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness lasting over 30 minutes. However, in the early stages, patients may have no symptoms after a warm shower. Osteoarthritis, on the other hand, tends to be associated with morning stiffness which eases relatively quickly with movement and exercise. In the aged and children, pain might not be the main presenting feature; the aged patient simply moves less, the infantile patient refuses to use the affected limb.
Elements of the history of the disorder guide diagnosis. Important features are speed and time of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, gelling or locking with inactivity, aggravating and relieving factors, and other systemic symptoms. It may include checking joints, observing movements, examination of skin for rashes or nodules and symptoms of pulmonary inflammation. Physical examination may confirm the diagnosis or may indicate systemic disease. Radiographs are often used to follow progression or help assess severity.
Blood tests and X-rays of the affected joints often are performed to make the diagnosis. Screening blood tests are indicated if certain arthritides are suspected. These might include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies.
Rheumatoid arthritis patients often have a high erythrocyte sedimentation rate (ESR, also known as sed rate) or C-reactive protein (CRP) level, which indicates the presence of an inflammatory process in the body. Anti-cyclic citrullinated peptide (anti-CCP) antibodies and rheumatoid factor (RF) are two more common blood tests. Positive results suggest rheumatoid arthritis, while negative results make this autoimmune condition less likely.
Imaging tests such as X-rays, MRI scans, or ultrasound are used to diagnose and monitor arthritis. Other imaging tests for rheumatoid arthritis that may be considered include computed tomography (CT) scanning, positron emission tomography (PET) scanning, bone scanning, and dual-energy X-ray absorptiometry (DEXA).
Osteoarthritis
Osteoarthritis is the most common form of arthritis. It affects humans and other animals, notably dogs, but also occurs in cats and horses. It can affect both the larger and the smaller joints of the body. In humans, this includes the hands, wrists, feet, back, hip, and knee. In dogs, this includes the elbow, hip, stifle (knee), shoulder, and back. The disease is essentially one acquired from daily wear and tear of the joint; however, osteoarthritis can also occur as a result of injury. Osteoarthritis begins in the cartilage and eventually causes the two opposing bones to erode into each other. The condition starts with minor pain during physical activity, but soon the pain can be continuous and even occur while in a state of rest. The pain can be debilitating and prevent one from doing some activities. In dogs, this pain can significantly affect quality of life and may include difficulty going up and down stairs, struggling to get up after lying down, trouble walking on slick floors, being unable to hop in and out of vehicles, difficulty jumping on and off furniture, and behavioral changes (e.g., aggression, difficulty squatting to toilet). Osteoarthritis typically affects the weight-bearing joints, such as the back, knee and hip. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. The strongest predictor of osteoarthritis is increased age, likely due to the declining ability of chondrocytes to maintain the structural integrity of cartilage. More than 30 percent of women have some degree of osteoarthritis by age 65. Other risk factors for osteoarthritis include prior joint trauma, obesity, and a sedentary lifestyle.
Rheumatoid arthritis
Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues. The attack is not only directed at the joint but to many other parts of the body. In rheumatoid arthritis, most damage occurs to the joint lining and cartilage which eventually results in erosion of two opposing bones. RA often affects joints in the fingers, wrists, knees and elbows, is symmetrical (appears on both sides of the body), and can lead to severe deformity in a few years if not treated. RA occurs mostly in people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability, and limitations in daily activities. With earlier diagnosis and aggressive treatment, many individuals can lead a better quality of life than if going undiagnosed for long after RA's onset. The risk factors with the strongest association for developing rheumatoid arthritis are the female sex, a family history of rheumatoid arthritis, age, obesity, previous joint damage from an injury, and exposure to tobacco smoke.
Bone erosion is a central feature of rheumatoid arthritis. Bone continuously undergoes remodeling by actions of bone resorbing osteoclasts and bone forming osteoblasts. One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium, caused in part by the production of pro-inflammatory cytokines and receptor activator of nuclear factor kappa B ligand (RANKL), a cell surface protein present in Th17 cells and osteoblasts. Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism.
Lupus
Lupus is a common collagen vascular disorder that can be present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain.
Gout
Gout is caused by deposition of uric acid crystals in the joints, causing inflammation. There is also an uncommon form of gouty arthritis caused by the formation of rhomboid crystals of calcium pyrophosphate known as pseudogout. In the early stages, the gouty arthritis usually occurs in one joint, but with time, it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot successfully be treated. When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol) or increase uric acid elimination from the body through the kidneys (e.g., probenecid), this can be referred to as refractory chronic gout.
Comparison of types
Other
Infectious arthritis is another severe form of arthritis. It presents with sudden onset of chills, fever and joint pain. The condition is caused by bacteria elsewhere in the body. Infectious arthritis must be rapidly diagnosed and treated promptly to prevent irreversible joint damage.
Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin problem first and then the arthritis. The typical features are continuous joint pains, stiffness and swelling. The disease does recur with periods of remission but there is no cure for the disorder. A small percentage develop a severely painful and destructive form of arthritis which destroys the small joints in the hands and can lead to permanent disability and loss of hand function.
Treatment
There is no known cure for arthritis and rheumatic diseases. Treatment options vary depending on the type of arthritis and include physical therapy, exercise and diet, orthopedic bracing, and oral and topical medications. Joint replacement surgery may be required to repair damage, restore function, or relieve pain.
Physical therapy
In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person.
Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay the need for surgical intervention in advanced cases. Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance, and flexibility; in some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with activities of daily living. Assistive technology can reduce a person's physical barriers by improving the use of the affected body part, and assistive devices can be customized to the patient or bought commercially.
Medications
There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects with further medications being added if insufficiently effective.
Depending on the type of arthritis, the medications that are given may be different. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol) while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and NSAIDs may be less well tolerated. However, topical NSAIDs may have better safety profiles than oral NSAIDs. For more severe cases of osteoarthritis, intra-articular corticosteroid injections may also be considered.
The drugs to treat rheumatoid arthritis (RA) range from corticosteroids to monoclonal antibodies given intravenously. Due to the autoimmune nature of RA, treatments may include not only pain medications and anti-inflammatory drugs, but also another category of drugs called disease-modifying antirheumatic drugs (DMARDs). csDMARDs, TNF biologics and tsDMARDs are specific kinds of DMARDs that are recommended for treatment. Treatment with DMARDs is designed to slow the progression of RA by modulating the adaptive immune response, which is driven in part by CD4+ T helper (Th) cells, specifically Th17 cells. Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines associated with inflammation, such as interleukin-17 (IL-17).
Surgery
A number of rheumasurgical interventions have been incorporated in the treatment of arthritis since the 1950s. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to optimized physical and medical therapy.
Adaptive aids
People with hand arthritis can have trouble with simple activities of daily living tasks (ADLs), such as turning a key in a lock or opening jars, as these activities can be cumbersome and painful. There are adaptive aids or assistive devices (ADs) available to help with these tasks, but they are generally more costly than conventional products with the same function. It is now possible to 3-D print adaptive aids, which have been released as open source hardware to reduce patient costs. Adaptive aids can significantly help arthritis patients and the vast majority of those with arthritis need and use them.
Alternative medicine
Further research is required to determine if transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis is effective for controlling pain.
Low level laser therapy may be considered for relief of pain and stiffness associated with arthritis. Evidence of benefit is tentative.
Pulsed electromagnetic field therapy (PEMFT) has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis. The FDA has not approved PEMFT for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions.
Epidemiology
Arthritis is predominantly a disease of the elderly, but children can also be affected by the disease. Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States a CDC survey based on data from 2013 to 2015 showed 54.4 million (22.7%) adults had self-reported doctor-diagnosed arthritis, and 23.7 million (43.5% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase. Adults with co-morbid conditions, such as heart disease, diabetes, and obesity, were seen to have a higher than average prevalence of doctor-diagnosed arthritis (49.3%, 47.1%, and 30.6% respectively).
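As a quick sanity check (an illustrative sketch, not part of the CDC report), the quoted percentages can be reproduced from the absolute counts:

```python
# Cross-check of the CDC survey figures quoted above (2013-2015 data).
# All inputs are the rounded values from the text, so results match only
# approximately.
adults_with_arthritis = 54.4e6   # doctor-diagnosed arthritis (22.7% of adults)
adults_with_aaal = 23.7e6        # arthritis-attributable activity limitation

implied_adult_population = adults_with_arthritis / 0.227
aaal_share = adults_with_aaal / adults_with_arthritis

print(f"Implied US adult population: {implied_adult_population/1e6:.0f} million")
print(f"Share of arthritis patients with AAAL: {aaal_share:.1%}")  # ~43.5% as quoted
```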
Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest increasing major health condition. Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking and underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies. The majority of studies, twelve, were from South Africa. Nine studies were well-conducted, eleven studies were of moderate quality, and seven studies were conducted poorly. The results of the systematic review were as follows:
Rheumatoid arthritis: 0.1% in Algeria (urban setting); 0.6% in Democratic Republic of Congo (urban setting); 2.5% and 0.07% in urban and rural settings in South Africa respectively; 0.3% in Egypt (rural setting), 0.4% in Lesotho (rural setting)
Osteoarthritis: 55.1% in South Africa (urban setting); ranged from 29.5 to 82.7% in South Africans aged 65 years and older
Knee osteoarthritis has the highest prevalence of all types of osteoarthritis, with 33.1% in rural South Africa
Ankylosing spondylitis: 0.1% in South Africa (rural setting)
Psoriatic arthritis: 4.4% in South Africa (urban setting)
Gout: 0.7% in South Africa (urban setting)
Juvenile idiopathic arthritis: 0.3% in Egypt (urban setting)
History
Evidence of osteoarthritis and potentially inflammatory arthritis has been discovered in dinosaurs. The first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples. It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy found along the border of modern Italy and Austria, to Egyptian mummies.
In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects. Augustin Jacob Landré-Beauvais, a 28-year-old resident physician at Salpêtrière Asylum in France, was the first person to describe the symptoms of rheumatoid arthritis. Though Landré-Beauvais' classification of rheumatoid arthritis as a relative of gout was inaccurate, his dissertation encouraged others to further study the disease.
Terminology
The term is derived from arthr- (from Greek ἄρθρον, árthron, "joint") and -itis (from Greek -ῖτις, a suffix denoting disease), the latter suffix having come to be associated with inflammation.
The word arthritides is the plural form of arthritis, and denotes the collective group of arthritis-like conditions.
See also
Antiarthritics
Arthritis Care (charity in the UK)
Arthritis Foundation (US not-for-profit)
Knee arthritis
Osteoimmunology
Weather pains
References
External links
American College of Rheumatology – US professional society of rheumatologists
National Institute of Arthritis and Musculoskeletal and Skin Diseases - US National Institute of Arthritis and Musculoskeletal and Skin Diseases
Aging-associated diseases
Inflammations
Rheumatology
Skeletal disorders
|
https://en.wikipedia.org/wiki/Acetylene
|
Acetylene (systematic name: ethyne) is the chemical compound with the formula C2H2 and structure HC≡CH. It is a hydrocarbon and the simplest alkyne. This colorless gas is widely used as a fuel and a chemical building block. It is unstable in its pure form and thus is usually handled as a solution. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities such as divinyl sulfide and phosphine.
As an alkyne, acetylene is unsaturated because its two carbon atoms are bonded together in a triple bond. The carbon–carbon triple bond places all four atoms in the same straight line, with CCH bond angles of 180°.
Discovery
Acetylene was discovered in 1836 by Edmund Davy, who identified it as a "new carburet of hydrogen". It was an accidental discovery while attempting to isolate potassium metal. By heating potassium carbonate with carbon at very high temperatures, he produced a residue of what is now known as potassium carbide (K2C2), which reacted with water to release the new gas. It was rediscovered in 1860 by French chemist Marcellin Berthelot, who coined the name acétylène. Berthelot's empirical formula for acetylene (C4H2), as well as the alternative name "quadricarbure d'hydrogène" (hydrogen quadricarbide), were incorrect because many chemists at that time used the wrong atomic mass for carbon (6 instead of 12). Berthelot was able to prepare this gas by passing vapours of organic compounds (methanol, ethanol, etc.) through a red-hot tube and collecting the effluent. He also found that acetylene was formed by sparking electricity through mixed cyanogen and hydrogen gases. Berthelot later obtained acetylene directly by passing hydrogen between the poles of a carbon arc.
Preparation
Except in China, acetylene production is dominated by partial combustion of natural gas.
Partial combustion of hydrocarbons
Since the 1950s, acetylene has mainly been manufactured by the partial combustion of methane. It is also recovered as a side product in the production of ethylene by cracking of hydrocarbons. Approximately 400,000 tonnes were produced by this method in 1983. Its presence in ethylene is usually undesirable because of its explosive character and its ability to poison Ziegler–Natta catalysts. It is selectively hydrogenated into ethylene, usually using Pd–Ag catalysts.
The partial combustion of methane proceeds as:
3 CH4 + 3 O2 → C2H2 + CO + 5 H2O
Dehydrogenation of alkanes
The heaviest alkanes in petroleum and natural gas are cracked into lighter molecules which are dehydrogenated at high temperature:
C2H6 → C2H2 + 2 H2
2 CH4 → C2H2 + 3 H2
This last reaction is implemented in the anaerobic decomposition of methane by microwave plasma. The advantages of this technology are the absence of CO2 emissions and the co-production of hydrogen as a secondary product, making it a low-carbon, electrified production process: stoichiometrically, 32 t of methane yields 26 t of acetylene and 6 t of hydrogen.
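The 32 t → 26 t + 6 t figure follows directly from the molar masses in 2 CH4 → C2H2 + 3 H2; a minimal check (illustrative, using integer atomic masses):

```python
# Mass balance for 2 CH4 -> C2H2 + 3 H2 (integer atomic masses: C = 12, H = 1).
M_CH4 = 12 + 4 * 1       # 16 g/mol
M_C2H2 = 2 * 12 + 2 * 1  # 26 g/mol
M_H2 = 2 * 1             # 2 g/mol

methane_in = 2 * M_CH4       # 32 mass units of methane in...
acetylene_out = 1 * M_C2H2   # ...26 units of acetylene out...
hydrogen_out = 3 * M_H2      # ...and 6 units of hydrogen out

assert methane_in == acetylene_out + hydrogen_out  # mass is conserved
print(acetylene_out, hydrogen_out)  # → 26 6
```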
Carbochemical method
The production of acetylene from calcium carbide is the traditional route and remains dominant in China:
CaC2 + 2 H2O → C2H2 + Ca(OH)2
The conditions for production of calcium carbide are environmentally unacceptable in most advanced countries, except China.
Until the 1950s, when oil supplanted coal as the chief source of reduced carbon, acetylene (and the aromatic fraction from coal tar) was the main source of organic chemicals in the chemical industry. It was prepared by the hydrolysis of calcium carbide, a reaction discovered by Friedrich Wöhler in 1862 and still familiar to students.
Calcium carbide production requires high temperatures, ~2000 °C, necessitating the use of an electric arc furnace. In the US, this process was an important part of the late-19th century revolution in chemistry enabled by the massive hydroelectric power project at Niagara Falls.
At the point of use, the carbide reacts with water to produce acetylene: 1 kg of carbide combines with 562.5 g of water to release 350 L of acetylene.
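The 562.5 g and 350 L figures are consistent with the stoichiometry of CaC2 + 2 H2O → C2H2 + Ca(OH)2; a small check (illustrative, using integer atomic masses and the 22.4 L/mol molar gas volume at STP):

```python
# Hydrolysis of calcium carbide: CaC2 + 2 H2O -> C2H2 + Ca(OH)2.
# Integer atomic masses (Ca = 40, C = 12, H = 1, O = 16); ideal gas at STP.
M_CAC2 = 40 + 2 * 12   # 64 g/mol
M_H2O = 2 * 1 + 16     # 18 g/mol
MOLAR_VOLUME_L = 22.4  # L/mol at STP

carbide_g = 1000.0                  # 1 kg of carbide
mol = carbide_g / M_CAC2            # 15.625 mol
water_g = 2 * M_H2O * mol           # water consumed by the reaction
acetylene_l = MOLAR_VOLUME_L * mol  # gas evolved

print(f"{water_g:.1f} g water, {acetylene_l:.0f} L acetylene")  # → 562.5 g water, 350 L acetylene
```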
Bonding
In terms of valence bond theory, in each carbon atom the 2s orbital hybridizes with one 2p orbital, forming two sp hybrids; the other two 2p orbitals remain unhybridized. One sp orbital on each carbon overlaps with its counterpart on the other carbon to form a strong σ bond between the carbons, while the outward-pointing sp orbital on each carbon forms a σ bond to a hydrogen atom. The two unhybridized 2p orbitals on each carbon form a pair of weaker π bonds.
Since acetylene is a linear symmetrical molecule, it possesses the D∞h point group.
Physical properties
Changes of state
At atmospheric pressure, acetylene cannot exist as a liquid and does not have a melting point. The triple point on the phase diagram corresponds to the melting point (−80.8 °C) at the minimal pressure at which liquid acetylene can exist (1.27 atm). At temperatures below the triple point, solid acetylene can change directly to the vapour (gas) by sublimation. The sublimation point at atmospheric pressure is −84.0 °C.
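The behaviour described above can be summarized in a trivial helper (an illustrative sketch; it encodes only the 1 atm case, where no liquid phase exists):

```python
# Phase of acetylene at atmospheric pressure: no liquid phase exists there,
# so the solid sublimes directly to gas at -84.0 °C.
SUBLIMATION_POINT_C = -84.0

def phase_at_1_atm(temperature_c: float) -> str:
    """Return the phase of acetylene at 1 atm for a temperature in °C."""
    return "solid" if temperature_c < SUBLIMATION_POINT_C else "gas"

print(phase_at_1_atm(-100.0))  # → solid
print(phase_at_1_atm(25.0))    # → gas
```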
Other
At room temperature, the solubility of acetylene in acetone is 27.9 g per kg; in the same amount of dimethylformamide (DMF), the solubility is 51 g. At 20.26 bar, the solubility increases to 689.0 and 628.0 g for acetone and DMF, respectively. These solvents are used in pressurized gas cylinders.
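The quoted solubilities translate into cylinder capacities as follows (a sketch; the 10 kg solvent charge is a hypothetical figure, not from the source):

```python
# Acetylene held in solution, per the solubilities quoted above
# (g of C2H2 per kg of solvent). The 10 kg solvent charge is hypothetical.
SOLUBILITY_G_PER_KG = {
    ("acetone", "1 atm"): 27.9,
    ("DMF", "1 atm"): 51.0,
    ("acetone", "20.26 bar"): 689.0,
    ("DMF", "20.26 bar"): 628.0,
}

solvent_kg = 10.0  # hypothetical solvent charge per cylinder
for (solvent, pressure), s in SOLUBILITY_G_PER_KG.items():
    dissolved_kg = s * solvent_kg / 1000.0
    print(f"{solvent:7s} at {pressure:>9s}: {dissolved_kg:.2f} kg C2H2")
```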
Applications
Welding
Approximately 20% of acetylene is supplied by the industrial gases industry for oxyacetylene gas welding and cutting due to the high temperature of the flame. Combustion of acetylene with oxygen produces a flame of over 3,300 °C, releasing 11.8 kJ/g. Oxyacetylene is the hottest-burning common fuel gas; only the flames of dicyanoacetylene and cyanogen are hotter. Oxy-acetylene welding was a popular welding process in previous decades, but the development and advantages of arc-based welding processes have made oxy-fuel welding nearly extinct for many applications, and acetylene usage for welding has dropped significantly. On the other hand, oxy-acetylene welding equipment is quite versatile – not only because the torch is preferred for some sorts of iron or steel welding (as in certain artistic applications), but also because it lends itself easily to brazing, braze-welding, metal heating (for annealing or tempering, bending or forming), the loosening of corroded nuts and bolts, and other applications. Bell Canada cable-repair technicians still use portable acetylene-fuelled torch kits as a soldering tool for sealing lead sleeve splices in manholes and in some aerial locations. Oxyacetylene welding may also be used in areas where electricity is not readily accessible, and oxyacetylene cutting is used in many metal fabrication shops. For use in welding and cutting, the working pressures must be controlled by a regulator, since above roughly 15 psi gauge (about 200 kPa absolute), if subjected to a shockwave (caused, for example, by a flashback), acetylene decomposes explosively into hydrogen and carbon.
Chemicals
Acetylene, despite its simplicity, is not used for many industrial processes.
One of the major chemical applications is ethynylation of formaldehyde.
Acetylene adds to aldehydes and ketones to form α-ethynyl alcohols. With formaldehyde, the reaction gives 1,4-butynediol, with propargyl alcohol as the by-product; copper acetylide is used as the catalyst:
2 CH2O + HC≡CH → HOCH2C≡CCH2OH
In addition to ethynylation, acetylene reacts with carbon monoxide and water or alcohols to give acrylic acid or acrylic esters; metal catalysts are required. These derivatives form products such as acrylic fibers, glasses, paints, resins, and polymers. Except in China, use of acetylene as a chemical feedstock has declined by 70% from 1965 to 2007 owing to cost and environmental considerations.
Historical uses
Prior to the widespread use of petrochemicals, coal-derived acetylene was a building block for several industrial chemicals. Thus acetylene can be hydrated to give acetaldehyde, which in turn can be oxidized to acetic acid. Processes leading to acrylates were also commercialized. Almost all of these processes became obsolete with the availability of petroleum-derived ethylene and propylene.
Niche applications
In 1881, the Russian chemist Mikhail Kucherov described the hydration of acetylene to acetaldehyde using catalysts such as mercury(II) bromide. Before the advent of the Wacker process, this reaction was conducted on an industrial scale.
The polymerization of acetylene with Ziegler–Natta catalysts produces polyacetylene films. Polyacetylene, a chain of CH centres with alternating single and double bonds, was one of the first discovered organic semiconductors. Its reaction with iodine produces a highly electrically conducting material. Although such materials are not useful, these discoveries led to the developments of organic semiconductors, as recognized by the Nobel Prize in Chemistry in 2000 to Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa.
In the 1920s, pure acetylene was experimentally used as an inhalation anesthetic.
Acetylene is sometimes used for carburization (that is, hardening) of steel when the object is too large to fit into a furnace.
Acetylene is used to volatilize carbon in radiocarbon dating. The carbonaceous material in an archeological sample is treated with lithium metal in a small specialized research furnace to form lithium carbide (also known as lithium acetylide). The carbide can then be reacted with water, as usual, to form acetylene gas to feed into a mass spectrometer to measure the isotopic ratio of carbon-14 to carbon-12.
Acetylene combustion produces a strong, bright light and the ubiquity of carbide lamps drove much acetylene commercialization in the early 20th century. Common applications included coastal lighthouses, street lights, and automobile and mining headlamps. In most of these applications, direct combustion is a fire hazard, and so acetylene has been replaced, first by incandescent lighting and many years later by low-power/high-lumen LEDs. Nevertheless, acetylene lamps remain in limited use in remote or otherwise inaccessible areas and in countries with a weak or unreliable central electric grid.
Natural occurrence
The energy richness of the C≡C triple bond and the rather high solubility of acetylene in water make it a suitable substrate for bacteria, provided an adequate source is available. A number of bacteria living on acetylene have been identified. The enzyme acetylene hydratase catalyzes the hydration of acetylene to give acetaldehyde:
C2H2 + H2O → CH3CHO
Acetylene is a moderately common chemical in the universe, often associated with the atmospheres of gas giants. One curious discovery of acetylene is on Enceladus, a moon of Saturn. Natural acetylene is believed to form from catalytic decomposition of long-chain hydrocarbons at high temperatures. Since such temperatures are highly unlikely on such a small distant body, this discovery is potentially suggestive of catalytic reactions within that moon, making it a promising site to search for prebiotic chemistry.
Reactions
Vinylation reactions
In vinylation reactions, H−X compounds add across the triple bond. Alcohols and phenols add to acetylene to give vinyl ethers. Thiols give vinyl thioethers. Similarly, vinylpyrrolidone and vinylcarbazole are produced industrially by vinylation of 2-pyrrolidone and carbazole.
The hydration of acetylene is a vinylation reaction, but the resulting vinyl alcohol isomerizes to acetaldehyde. The reaction is catalyzed by mercury salts. This reaction once was the dominant technology for acetaldehyde production, but it has been displaced by the Wacker process, which affords acetaldehyde by oxidation of ethylene, a cheaper feedstock. A similar situation applies to the conversion of acetylene to the valuable vinyl chloride by hydrochlorination vs the oxychlorination of ethylene.
Vinyl acetate is used instead of acetylene for some vinylations, which are more accurately described as transvinylations. Higher esters of vinyl acetate have been used in the synthesis of vinyl formate.
Organometallic chemistry
Acetylene and its derivatives (2-butyne, diphenylacetylene, etc.) form complexes with transition metals. Its bonding to the metal is somewhat similar to that of ethylene complexes. These complexes are intermediates in many catalytic reactions such as alkyne trimerisation to benzene, tetramerization to cyclooctatetraene, and carbonylation to hydroquinone under basic conditions.
In the presence of certain transition metals, alkynes undergo alkyne metathesis.
Metal acetylides are also common. Copper(I) acetylide and silver acetylide can be formed in aqueous solutions with ease due to a favorable solubility equilibrium.
Acid-base reactions
With a pKa of 25, acetylene can be deprotonated by a superbase to form an acetylide:
HC≡CH + RM → RH + HC≡CM
Various organometallic and inorganic reagents are effective.
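The acid–base logic above can be illustrated with rounded textbook pKa values (illustrative figures, not from the source): a base deprotonates acetylene only if its conjugate acid is the weaker acid, i.e. has a pKa above 25.

```python
# Can a given base deprotonate acetylene (pKa 25)? Yes, if the pKa of the
# base's conjugate acid exceeds 25. pKa values are rounded textbook figures.
PKA_ACETYLENE = 25.0

CONJUGATE_ACID_PKA = {   # base -> pKa of its conjugate acid
    "NaNH2": 38.0,       # conjugate acid NH3
    "n-BuLi": 50.0,      # conjugate acid butane
    "NaOH": 15.7,        # conjugate acid H2O
    "NaOEt": 16.0,       # conjugate acid ethanol
}

def deprotonates_acetylene(base: str) -> bool:
    return CONJUGATE_ACID_PKA[base] > PKA_ACETYLENE

for base in CONJUGATE_ACID_PKA:
    verdict = "deprotonates" if deprotonates_acetylene(base) else "too weak"
    print(f"{base:7s}: {verdict}")
```

This reproduces the familiar result that amide and alkyllithium bases form acetylides, while hydroxide and alkoxides are too weak.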
Hydrogenation
Acetylene can be semihydrogenated to ethylene, providing a feedstock for a variety of polyethylene plastics. Halogens add to the triple bond.
Safety and handling
Acetylene is not especially toxic, but when generated from calcium carbide, it can contain toxic impurities such as traces of phosphine and arsine, which give it a distinct garlic-like smell. It is also highly flammable, as are most light hydrocarbons, hence its use in welding. Its most singular hazard is associated with its intrinsic instability, especially when it is pressurized: under certain conditions acetylene can react in an exothermic addition-type reaction to form a number of products, typically benzene and/or vinylacetylene, possibly in addition to carbon and hydrogen. Consequently, acetylene, if initiated by intense heat or a shockwave, can decompose explosively if the absolute pressure of the gas exceeds about 200 kPa. Most regulators and pressure gauges on equipment report gauge pressure, and the safe limit for acetylene therefore is about 101 kPa gauge, or 15 psig. It is therefore supplied and stored dissolved in acetone or dimethylformamide (DMF), contained in a gas cylinder with a porous filling (Agamassan), which renders it safe to transport and use, given proper handling. Acetylene cylinders should be used in the upright position to avoid withdrawing acetone during use.
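The gauge-versus-absolute distinction is a common source of confusion; a minimal conversion helper (an illustrative sketch, not a safety tool):

```python
# Gauge pressure is measured relative to atmospheric pressure (~101.325 kPa),
# so a ~200 kPa absolute decomposition limit corresponds to ~101 kPa gauge
# (about 15 psig).
ATMOSPHERIC_KPA = 101.325

def gauge_to_absolute_kpa(gauge_kpa: float) -> float:
    """Convert a regulator's gauge reading to absolute pressure."""
    return gauge_kpa + ATMOSPHERIC_KPA

def within_acetylene_limit(gauge_kpa: float, limit_gauge_kpa: float = 101.0) -> bool:
    """Check a gauge reading against the ~15 psig working limit."""
    return gauge_kpa <= limit_gauge_kpa

print(f"{gauge_to_absolute_kpa(101.0):.0f} kPa absolute")  # → 202 kPa absolute
print(within_acetylene_limit(150.0))                       # → False
```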
Information on safe storage of acetylene in upright cylinders is provided by the OSHA, Compressed Gas Association, United States Mine Safety and Health Administration (MSHA), EIGA, and other agencies.
Copper catalyses the decomposition of acetylene, and as a result acetylene should not be transported in copper pipes.
Cylinders should be stored in an area segregated from oxidizers to avoid exacerbated reaction in case of fire/leakage. Acetylene cylinders should not be stored in confined spaces, enclosed vehicles, garages, or buildings, to avoid unintended leakage leading to an explosive atmosphere. In the US, the National Electrical Code (NEC) requires consideration for hazardous areas, including those where acetylene may be released during accidents or leaks. Considerations may include electrical classification and the use of listed Group A electrical components. Further information on determining the areas requiring special consideration is in NFPA 497. In Europe, ATEX also requires consideration for hazardous areas where flammable gases may be released during accidents or leaks.
References
External links
Acetylene Production Plant and Detailed Process
Acetylene at Chemistry Comes Alive!
Movie explaining acetylene formation from calcium carbide and the explosive limits forming fire hazards
Calcium Carbide & Acetylene at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Acetylene
Alkynes
Fuel gas
Industrial gases
Synthetic fuel technologies
Explosive gases
|
https://en.wikipedia.org/wiki/Antibiotic
|
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the common cold or influenza; drugs which inhibit growth of viruses are termed antiviral drugs or antivirals rather than antibiotics. They are also not effective against fungi; drugs which inhibit growth of fungi are called antifungal drugs.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern-day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. The World Health Organization has classified antimicrobial resistance as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Global deaths attributable to antimicrobial resistance numbered 1.27 million in 2019.
Etymology
The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947.
The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not.
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped.
Usage
Medical uses
Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days.
When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis.
Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia, particularly cancer-related.
The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke.
Routes of administration
There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving high and sustained concentration of antibiotic at the site of infection; reducing the potential for systemic absorption and toxicity, and total volumes of antibiotic required are reduced, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring. It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose.
Global consumption
Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption, published in 2018, analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption, with a rate of 64.4, and Burundi the lowest, at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed antibiotics.
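A defined daily dose (DDD) rate converts to absolute consumption as follows (a sketch; the 1 million population is a hypothetical figure, not from the WHO report):

```python
# Annual antibiotic consumption implied by the WHO 2015 rates quoted above
# (DDD per 1,000 inhabitants per day). Population size is hypothetical.
RATES_DDD_PER_1000_PER_DAY = {"Mongolia": 64.4, "Burundi": 4.4}
population = 1_000_000  # hypothetical population

for country, rate in RATES_DDD_PER_1000_PER_DAY.items():
    ddd_per_year = rate * (population / 1000) * 365
    print(f"{country}: {ddd_per_year/1e6:.1f} million DDD per year")
```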
Side effects
Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide extent of adverse side effects ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis.
Common side effects of oral antibiotics include diarrhea, caused by disruption of the species composition of the intestinal flora, which can lead, for example, to overgrowth of pathogenic bacteria such as Clostridium difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid.
Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic, including human, cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. They are also known to affect chloroplasts.
Interactions
Birth control pills
There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate antibiotics do not interfere with birth control pills; clinical studies suggest the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). Situations that may increase the risk of oral contraceptive failure include non-compliance (missing taking the pill), vomiting, or diarrhea, as well as gastrointestinal disorders or interpatient variability in oral contraceptive absorption that affect ethinylestradiol serum levels in the blood. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended.
In cases where antibiotics have been suggested to affect the efficiency of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these cases may be due to an increase in the activity of hepatic enzymes causing increased breakdown of the pill's active ingredients. Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure prior to dismissing the need for backup contraception.
Alcohol
Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.
Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone, cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound.
Pharmacodynamics
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors, including host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and antibacterials have also been shown to eliminate bacterial infection in clinical settings. Since the activity of antibacterials frequently depends on their concentration, in vitro characterization of antibacterial activity commonly includes determining the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of an antibacterial.
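The MIC read-out described above can be sketched in code. The following is a minimal, hypothetical illustration (the function name, concentrations, and data are invented for this example, not taken from the article): given a two-fold broth-dilution series scored for visible growth, the MIC is the lowest tested concentration at which no growth is observed.

```python
def mic(dilutions):
    """Return the minimum inhibitory concentration from a dilution series.

    `dilutions` maps antibiotic concentration (e.g. in µg/mL) to a boolean
    indicating whether visible bacterial growth occurred at that
    concentration. Returns None if growth occurred at every concentration
    tested (i.e. the MIC lies above the tested range).
    """
    # Concentrations with no visible growth, in ascending order.
    inhibitory = [conc for conc, growth in sorted(dilutions.items()) if not growth]
    return inhibitory[0] if inhibitory else None


# Illustrative two-fold dilution series: growth at 0.25 and 0.5 µg/mL,
# no growth at 1.0 µg/mL and above.
series = {0.25: True, 0.5: True, 1.0: False, 2.0: False, 4.0: False}
print(mic(series))  # → 1.0
```

The MBC would be determined analogously, but from subculture kill data rather than visible growth inhibition.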
To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.
Combination therapy
In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome as the combined effect of both antibiotics is better than their individual effect. Fosfomycin has the highest number of synergistic combinations among antibiotics and is almost always used as a partner drug. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic and the combined effects of the two antibiotics may be less than if one of the antibiotics was given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic.
In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria.
Classes
Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities, killing the bacteria. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic, inhibiting further growth (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year hiatus in the discovery of new classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).
Production
With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons.
Since the first pioneering efforts of Howard Florey and Ernst Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions.
Resistance
The emergence of antibiotic-resistant bacteria is a common phenomenon mainly caused by the overuse and misuse of antibiotics. It represents a global threat to health.
Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains.
Resistance may take the form of biodegradation of pharmaceuticals, as seen in sulfamethazine-degrading soil bacteria that were introduced to sulfamethazine through medicated pig feces.
The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use.
Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria.
Paleontological data show that both antibiotics and antibiotic resistance mechanisms are ancient. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability.
Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were, for a while, well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. Another example is NDM-1, a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic.
In recent years, even anaerobic bacteria, historically considered less concerning in terms of resistance, have demonstrated high rates of antibiotic resistance, particularly Bacteroides, for which resistance rates to penicillin have been reported to exceed 90%.
Misuse
Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is to try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. However, potential harm from antibiotics extends beyond selection of antimicrobial resistance, and their overuse is associated with adverse effects for patients themselves, seen most clearly in critically ill patients in intensive care units. Self-prescribing of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. A common form of inappropriate antibiotic treatment is the prescription of antibiotics for viral infections such as the common cold, against which they are ineffective. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point-of-care diagnostic tests, particularly in resource-limited settings, is considered one of the drivers of antibiotic misuse.
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children.
The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association.
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year.
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for uses of antibiotics in livestock that violated FDA regulations.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
Other forms of antibiotic-associated harm include anaphylaxis, drug toxicity (most notably kidney and liver damage), and super-infections with resistant organisms. Antibiotics are also known to affect mitochondrial function, and this may contribute to the bioenergetic failure of immune cells seen in sepsis. They also alter the microbiome of the gut, lungs, and skin, which may be associated with adverse effects such as Clostridium difficile-associated diarrhoea. Whilst antibiotics can clearly be lifesaving in patients with bacterial infections, their overuse, especially in patients where infections are hard to diagnose, can lead to harm via multiple mechanisms.
History
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source.
The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Various essential oils have also been shown to have antimicrobial properties, and the plants from which these oils are derived can be used as niche antimicrobial agents.
Synthetic antibiotics derived from dyes
Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine.
This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913.
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.
Penicillin and other natural antibiotics
Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics".
In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination. In 1876, physicist John Tyndall also contributed to this field.
In 1895 Vincenzo Tiberio, Italian physician, published a paper on the antibacterial power of some extracts of mold.
In 1897, doctoral student Ernest Duchesne submitted a dissertation ("Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes"), the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics.
In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium rubens, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.
Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides described above. The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming.
Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War.
Late 20th century
During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003.
Antibiotic pipeline
Both the WHO and the Infectious Disease Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Disease Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli currently in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority.
A few antibiotics have received marketing authorization in the last seven years. These include the cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin, for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. New cephalosporin–β-lactamase inhibitor combinations have also been approved, including ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection.
Possible improvements include clarification of clinical trial regulations by FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."
Replenishing the antibiotic pipeline and developing other new therapies
Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments.
Natural product-based antibiotic discovery
Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic, or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes).
In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics.
Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility).
Immunoglobulin therapy
Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial diseases. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridium difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors.
Phage therapy
Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages have extremely narrow host ranges, each specific to certain bacteria; thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phage protects "good" bacteria from destruction.
Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails.
There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics, remains an attractive option.
Fecal microbiota transplants
Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Fecal microbiota transplantation has also been used more recently for inflammatory bowel diseases.
Antisense RNA-based treatments
Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia.
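Step (b) above, designing an RNA strand complementary to a target mRNA, amounts to taking a reverse complement. The following is a small illustrative sketch (the function and example sequence are invented for this illustration; real antisense oligomer design also considers secondary structure, chemistry, and delivery):

```python
# Watson–Crick base pairing for RNA: A pairs with U, G pairs with C.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense RNA strand for an mRNA sequence.

    The antisense strand is the reverse complement, written 5'→3',
    so that it hybridizes with the mRNA in antiparallel orientation
    and blocks its translation.
    """
    return "".join(COMPLEMENT[base] for base in reversed(mrna))


# A short illustrative mRNA fragment (start codon AUG plus two codon bases).
print(antisense("AUGGCC"))  # → GGCCAU
```

In a therapeutic context, a sequence like this would target the mRNA of an essential bacterial gene, then be delivered to the infection site using cell-penetrating peptides or liposomes as described above.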
In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies.
CRISPR-Cas9-based treatments
In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA.
Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection.
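Retargeting hinges on choosing a new spacer sequence: Streptococcus pyogenes Cas9 cleaves only where a roughly 20-nt protospacer is immediately followed by an NGG PAM in the target DNA. The sketch below scans a made-up gene fragment for such sites; the sequence and function name are illustrative, not taken from any real resistance gene.

```python
import re

# Sketch: scan a (hypothetical) resistance-gene fragment for SpCas9 target
# sites, i.e. a 20-nt protospacer immediately followed by an NGG PAM.
def find_protospacers(dna: str):
    """Yield (position, 20-nt protospacer, PAM) for every NGG PAM site."""
    # Zero-width lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
        yield m.start(), m.group(1), m.group(2)

gene = "ATGC" * 6 + "TGG" + "ATGC"   # made-up fragment with a single GG pair
sites = list(find_protospacers(gene))
```

Each hit is a candidate spacer to encode in the CRISPR array so that Cas9 cuts the resistance gene rather than viral DNA.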
Reducing the selection pressure for antibiotic resistance
In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food) and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, in which clinicians alternate the antibiotics used to treat microbial diseases, has been proposed, but recent studies suggest such strategies are ineffective against antibiotic resistance.
Vaccines
Vaccines rely on immune modulation or augmentation. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases. Vaccines made from attenuated whole cells or lysates have been replaced largely by less reactogenic, cell-free vaccines consisting of purified components, including capsular polysaccharides and their conjugates to protein carriers, as well as inactivated toxins (toxoids) and proteins.
See also
References
Further reading
External links
Anti-infective agents
|
https://en.wikipedia.org/wiki/Allotropy
|
Allotropy or allotropism () is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the elements. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in different manners.
For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations).
The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element.
For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state.
History
The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek allos ("other") and tropos ("manner, form"). After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recognized as O2 and O3. In the early 20th century, it was recognized that other cases such as carbon were due to differences in crystal structure.
By 1912, Ostwald noted that the allotropy of elements is just a special case of the phenomenon of polymorphism known for compounds, and proposed that the terms allotrope and allotropy be abandoned and replaced by polymorph and polymorphism. Although many other chemists have repeated this advice, IUPAC and most chemistry texts still favour the usage of allotrope and allotropy for elements only.
Differences in properties of an element's allotropes
Allotropes are different structural forms of the same element and can exhibit quite different physical properties and chemical behaviours. The change between allotropic forms is triggered by the same forces that affect other structures, i.e., pressure, light, and temperature. Therefore, the stability of the particular allotropes depends on particular conditions. For instance, iron changes from a body-centered cubic structure (ferrite) to a face-centered cubic structure (austenite) above 912 °C, and tin undergoes a modification known as tin pest from a metallic form to a semiconductor form below 13.2 °C (55.8 °F). As an example of allotropes having different chemical behaviour, ozone (O3) is a much stronger oxidizing agent than dioxygen (O2).
List of allotropes
Typically, elements capable of variable coordination number and/or oxidation states tend to exhibit greater numbers of allotropic forms. Another contributing factor is the ability of an element to catenate.
Examples of allotropes include:
Non-metals
Metalloids
Metals
Among the metallic elements that occur in nature in significant quantities (56 up to U, without Tc and Pm), almost half (27) are allotropic at ambient pressure: Li, Be, Na, Ca, Ti, Mn, Fe, Co, Sr, Y, Zr, Sn, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Yb, Hf, Tl, Th, Pa and U. Some phase transitions between allotropic forms of technologically relevant metals are those of Ti at 882 °C, Fe at 912 °C and 1394 °C, Co at 422 °C, Zr at 863 °C, Sn at 13 °C and U at 668 °C and 776 °C.
Lanthanides and actinides
Cerium, samarium, dysprosium and ytterbium have three allotropes.
Praseodymium, neodymium, gadolinium and terbium have two allotropes.
Plutonium has six distinct solid allotropes under "normal" pressures. Their densities vary within a ratio of some 4:3, which vastly complicates all kinds of work with the metal (particularly casting, machining, and storage). A seventh plutonium allotrope exists at very high pressures. The transuranium metals Np, Am, and Cm are also allotropic.
Promethium, americium, berkelium and californium have three allotropes each.
Nanoallotropes
In 2017, the concept of nanoallotropy was proposed by Rafal Klajn of the Organic Chemistry Department of the Weizmann Institute of Science. Nanoallotropes, or allotropes of nanomaterials, are nanoporous materials that have the same chemical composition (e.g., Au), but differ in their architecture at the nanoscale (that is, on a scale 10 to 100 times the dimensions of individual atoms). Such nanoallotropes may help create ultra-small electronic devices and find other industrial applications. The different nanoscale architectures translate into different properties, as was demonstrated for surface-enhanced Raman scattering performed on several different nanoallotropes of gold. A two-step method for generating nanoallotropes was also created.
See also
Isomer
Polymorphism (materials science)
Notes
References
External links
Allotropes – Chemistry Encyclopedia
Chemistry
Inorganic chemistry
Physical chemistry
|
https://en.wikipedia.org/wiki/Antiprism
|
In geometry, an n-gonal antiprism or n-antiprism is a polyhedron composed of two parallel direct copies (not mirror images) of an n-sided polygon, connected by an alternating band of 2n triangles. They are represented by the Conway notation An.
Antiprisms are a subclass of prismatoids, and are a (degenerate) type of snub polyhedron.
Antiprisms are similar to prisms, except that the bases are twisted relative to each other, and that the side faces (connecting the bases) are triangles, rather than quadrilaterals.
The dual polyhedron of an n-gonal antiprism is an n-gonal trapezohedron.
History
At the intersection of modern-day graph theory and coding theory, the triangulation of a set of points has interested mathematicians since Isaac Newton, who fruitlessly sought a mathematical proof of the kissing number problem in 1694. The existence of antiprisms was discussed, and their name was coined, by Johannes Kepler, though it is possible that they were previously known to Archimedes, as they satisfy the same conditions on faces and on vertices as the Archimedean solids. According to Ericson and Zinoviev, Harold Scott MacDonald Coxeter wrote at length on the topic, and was among the first to apply the mathematics of Victor Schlegel to this field.
Knowledge in this field is "quite incomplete" and "was obtained fairly recently", i.e. in the 20th century. For example, as of 2001 it had been proven for only a limited number of non-trivial cases that the n-gonal antiprism is the mathematically optimal arrangement of 2n points on a sphere in the sense of maximizing the minimum Euclidean distance between any two points in the set: in 1943 by László Fejes Tóth for 4 and 6 points (digonal and trigonal antiprisms, which are Platonic solids); in 1951 by Kurt Schütte and Bartel Leendert van der Waerden for 8 points (tetragonal antiprism, which is not a cube).
The chemical structure of binary compounds has been remarked to be in the family of antiprisms; especially those of the family of boron hydrides (in 1975) and carboranes because they are isoelectronic. This is a mathematically real conclusion reached by studies of X-ray diffraction patterns, and stems from the 1971 work of Kenneth Wade, the nominative source for Wade's rules of polyhedral skeletal electron pair theory.
Rare-earth metals such as the lanthanides form antiprismatic compounds with some of the halides or some of the iodides. The study of crystallography is useful here. Some lanthanides, when arranged in peculiar antiprismatic structures with chlorine and water, can form molecule-based magnets.
Right antiprism
For an antiprism with regular n-gon bases, one usually considers the case where these two copies are twisted by an angle of 180/n degrees.
The axis of a regular polygon is the line perpendicular to the polygon plane and passing through the polygon centre.
For an antiprism with congruent regular n-gon bases, twisted by an angle of 180/n degrees, more regularity is obtained if the bases have the same axis: are coaxial; i.e. (for non-coplanar bases): if the line connecting the base centers is perpendicular to the base planes. Then the antiprism is called a right antiprism, and its side faces are isosceles triangles.
Uniform antiprism
A uniform n-antiprism has two congruent regular n-gons as base faces, and 2n equilateral triangles as side faces.
Uniform antiprisms form an infinite class of vertex-transitive polyhedra, as do uniform prisms. For n = 2, we have the regular tetrahedron as a digonal antiprism (degenerate antiprism); for n = 3, the regular octahedron as a triangular antiprism (non-degenerate antiprism).
Schlegel diagrams
Cartesian coordinates
Cartesian coordinates for the vertices of a right n-antiprism (i.e. with regular n-gon bases and isosceles triangle side faces) are:

(cos(kπ/n), sin(kπ/n), (−1)^k h)

where 0 ≤ k ≤ 2n − 1;

if the n-antiprism is uniform (i.e. if the triangles are equilateral), then:

2h² = cos(π/n) − cos(2π/n).
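As a numeric sketch of this construction, the snippet below generates the 2n vertices for a given n, choosing the half-height h so the antiprism is uniform (2h² = cos(π/n) − cos(2π/n)), then confirms for n = 3 (the regular octahedron) that base and lateral edges agree.

```python
import math

# Sketch: vertices of a right n-antiprism with half-height h chosen so that
# the antiprism is uniform: 2*h**2 = cos(pi/n) - cos(2*pi/n).
def uniform_antiprism_vertices(n: int):
    h = math.sqrt((math.cos(math.pi / n) - math.cos(2 * math.pi / n)) / 2)
    return [(math.cos(k * math.pi / n),
             math.sin(k * math.pi / n),
             (-1) ** k * h)
            for k in range(2 * n)]

# For n = 3 the result is a regular octahedron: all edges have equal length.
verts = uniform_antiprism_vertices(3)
base_edge = math.dist(verts[0], verts[2])   # adjacent vertices of one base
side_edge = math.dist(verts[0], verts[1])   # lateral edge of a triangle face
```

With unit circumradius, both edges come out as √3 for n = 3, as expected for an octahedron inscribed this way.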
Volume and surface area
Let a be the edge-length of a uniform n-gonal antiprism; then the volume is:

V = (n √(4cos²(π/2n) − 1) sin(3π/2n)) / (12 sin²(π/n)) · a³

and the surface area is:

A = (n/2)(cot(π/n) + √3) · a²

Furthermore, the volume of a right n-gonal antiprism with side length ℓ of its bases and height h is given by:

V = (n (csc(π/n) + 2cot(π/n)) / 12) · h ℓ²

Note that the volume of a right n-gonal prism with the same ℓ and h is:

V = (n cot(π/n) / 4) · h ℓ²

which is smaller than that of an antiprism.
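These volume and surface-area expressions can be checked numerically. The sketch below assumes unit edge length and uses the regular octahedron (the uniform 3-antiprism), whose known volume is √2/3 and surface area 2√3.

```python
import math

# Numeric sketch of the antiprism formulas, assuming edge length a = 1.
def antiprism_volume(n: int, a: float = 1.0) -> float:
    """Volume of the uniform n-gonal antiprism with edge length a."""
    return (n * math.sqrt(4 * math.cos(math.pi / (2 * n)) ** 2 - 1)
            * math.sin(3 * math.pi / (2 * n))
            / (12 * math.sin(math.pi / n) ** 2)) * a ** 3

def antiprism_area(n: int, a: float = 1.0) -> float:
    """Surface area of the uniform n-gonal antiprism with edge length a."""
    return (n / 2) * (1 / math.tan(math.pi / n) + math.sqrt(3)) * a ** 2

def right_antiprism_volume(n: int, ell: float, h: float) -> float:
    """Volume of a right n-gonal antiprism with base side ell and height h."""
    return (n * (1 / math.sin(math.pi / n) + 2 / math.tan(math.pi / n)) / 12
            * h * ell ** 2)
```

The uniform-antiprism and right-antiprism formulas agree when h is set to the uniform height, which serves as a consistency check between the two expressions.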
Related polyhedra
There are an infinite set of truncated antiprisms, including a lower-symmetry form of the truncated octahedron (truncated triangular antiprism). These can be alternated to create snub antiprisms, two of which are Johnson solids, and the snub triangular antiprism is a lower symmetry form of the regular icosahedron.
Four-dimensional antiprisms can be defined as having two dual polyhedra as parallel opposite faces, so that each three-dimensional face between them comes from two dual parts of the polyhedra: a vertex and a dual polygon, or two dual edges. Every three-dimensional polyhedron is combinatorially equivalent to one of the two opposite faces of a four-dimensional antiprism, constructed from its canonical polyhedron and its polar dual. However, there exist four-dimensional polyhedra that cannot be combined with their duals to form five-dimensional antiprisms.
Symmetry
The symmetry group of a right n-antiprism (i.e. with regular bases and isosceles side faces) is Dnd of order 4n, except in the cases of:

n = 2: the regular tetrahedron, which has the larger symmetry group Td of order 24, which has three versions of D2d as subgroups;

n = 3: the regular octahedron, which has the larger symmetry group Oh of order 48, which has four versions of D3d as subgroups.
The symmetry group contains inversion if and only if n is odd.
The rotation group is Dn of order 2n, except in the cases of:

n = 2: the regular tetrahedron, which has the larger rotation group T of order 12, which has three versions of D2 as subgroups;

n = 3: the regular octahedron, which has the larger rotation group O of order 24, which has four versions of D3 as subgroups.
Note: The right n-antiprisms have congruent regular n-gon bases and congruent isosceles triangle side faces, and thus have the same (dihedral) symmetry group as the uniform n-antiprism, for n ≥ 4.
Star antiprism
Uniform star antiprisms are named by their star polygon bases, {p/q}, and exist in prograde and in retrograde (crossed) solutions. Crossed forms have intersecting vertex figures, and are denoted by "inverted" fractions: p/(p – q) instead of p/q; example: 5/3 instead of 5/2.
A right star antiprism has two congruent coaxial regular convex or star polygon base faces, and 2n isosceles triangle side faces.
Any star antiprism with regular convex or star polygon bases can be made a right star antiprism (by translating and/or twisting one of its bases, if necessary).
In the retrograde forms but not in the prograde forms, the triangles joining the convex or star bases intersect the axis of rotational symmetry. Thus:
Retrograde star antiprisms with regular convex polygon bases cannot have all equal edge lengths, so cannot be uniform. "Exception": a retrograde star antiprism with equilateral triangle bases (vertex configuration: 3.3/2.3.3) can be uniform; but then, it has the appearance of an equilateral triangle: it is a degenerate star polyhedron.
Similarly, some retrograde star antiprisms with regular star polygon bases cannot have all equal edge lengths, so cannot be uniform. Example: a retrograde star antiprism with regular star 7/5-gon bases (vertex configuration: 3.3.3.7/5) cannot be uniform.
Also, star antiprism compounds with regular star p/q-gon bases can be constructed if p and q have common factors. Example: a star 10/4-antiprism is the compound of two star 5/2-antiprisms.
See also
Apeirogonal antiprism
Grand antiprism – a four-dimensional polytope
One World Trade Center, a building consisting primarily of an elongated square antiprism
Skew polygon
References
Chapter 2: Archimedean polyhedra, prisms and antiprisms
Nonconvex Prisms and Antiprisms
Paper models of prisms and antiprisms
Uniform polyhedra
Prismatoid polyhedra
Topological graph theory
Graph drawing
Coxeter groups
Elementary geometry
Polyhedra
Polytopes
Triangulation (geometry)
Knot invariants
|
https://en.wikipedia.org/wiki/Abzyme
|
An abzyme (from antibody and enzyme), also called catmab (from catalytic monoclonal antibody), and most often called catalytic antibody or sometimes catab, is a monoclonal antibody with catalytic activity. Abzymes are usually raised in lab animals immunized against synthetic haptens, but some natural abzymes can be found in normal humans (anti-vasoactive intestinal peptide autoantibodies) and in patients with autoimmune diseases such as systemic lupus erythematosus, where they can bind to and hydrolyze DNA. To date abzymes display only weak, modest catalytic activity and have not proved to be of any practical use. They are, however, subjects of considerable academic interest. Studying them has yielded important insights into reaction mechanisms, enzyme structure and function, catalysis, and the immune system itself.
Enzymes function by lowering the activation energy of the transition state of a chemical reaction, thereby enabling the formation of an otherwise less-favorable molecular intermediate between the reactant(s) and the product(s). If an antibody is developed to bind to a molecule that is structurally and electronically similar to the transition state of a given chemical reaction, the developed antibody will bind to, and stabilize, the transition state, just like a natural enzyme, lowering the activation energy of the reaction, and thus catalyzing the reaction. By raising an antibody to bind to a stable transition-state analog, a new and unique type of enzyme is produced.
So far, all catalytic antibodies produced have displayed only modest, weak catalytic activity. The reasons for low catalytic activity for these molecules have been widely discussed. Possibilities indicate that factors beyond the binding site may play an important role, in particular through protein dynamics. Some abzymes have been engineered to use metal ions and other cofactors to improve their catalytic activity.
History
The possibility of catalyzing a reaction by means of an antibody which binds the transition state was first suggested by William P. Jencks in 1969. In 1994 Peter G. Schultz and Richard A. Lerner received the prestigious Wolf Prize in Chemistry for developing catalytic antibodies for many reactions and popularizing their study into a significant sub-field of enzymology.
Abzymes in healthy human breast milk
There are a broad range of abzymes in healthy human mothers with DNAse, RNAse, and protease activity.
Potential HIV treatment
In a June 2008 issue of the journal Autoimmunity Reviews, researchers S. Planque, Sudhir Paul, Ph.D., and Yasuhiro Nishiyama, Ph.D., of the University of Texas Medical School at Houston announced that they had engineered an abzyme that degrades the superantigenic region of the gp120 CD4 binding site. This is the one part of the HIV virus outer coating that does not change, because it is the attachment point to T lymphocytes, the key cell in cell-mediated immunity. Once infected by HIV, patients produce antibodies to the more changeable parts of the viral coat; these antibodies are ineffective because of the virus's ability to change its coat rapidly. Because the gp120 protein is necessary for HIV to attach, it does not change across different strains and is a point of vulnerability across the entire range of the HIV variant population.
The abzyme does more than bind to the site: it catalytically destroys the site, rendering the virus inert, and then can attack other HIV viruses. A single abzyme molecule can destroy thousands of HIV viruses.
References
Monoclonal antibodies
Immune system
Enzymes
|
https://en.wikipedia.org/wiki/Ampicillin
|
Ampicillin is an antibiotic belonging to the aminopenicillin class of the penicillin family. The drug is used to prevent and treat a number of bacterial infections, such as respiratory tract infections, urinary tract infections, meningitis, salmonellosis, and endocarditis. It may also be used to prevent group B streptococcal infection in newborns. It is used by mouth, by injection into a muscle, or intravenously.
Common side effects include rash, nausea, and diarrhea. It should not be used in people who are allergic to penicillin. Serious side effects may include Clostridium difficile colitis or anaphylaxis. While usable in those with kidney problems, the dose may need to be decreased. Its use during pregnancy and breastfeeding appears to be generally safe.
Ampicillin was discovered in 1958 and came into commercial use in 1961. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ampicillin as critically important for human medicine. It is available as a generic medication.
Medical uses
Diseases
Bacterial meningitis; an aminoglycoside can be added to increase efficacy against gram-negative meningitis bacteria
Endocarditis by enterococcal strains (off-label use); often given with an aminoglycoside
Gastrointestinal infections caused by contaminated water or food (for example, by Salmonella)
Genito-urinary tract infections
Healthcare-associated infections that are related to infections from using urinary catheters and that are unresponsive to other medications
Otitis media (middle ear infection)
Prophylaxis (i.e. to prevent infection) in those who previously had rheumatic heart disease or are undergoing dental procedures, vaginal hysterectomies, or C-sections. It is also used in pregnant women who are carriers of group B streptococci to prevent early-onset neonatal infections.
Respiratory infections, including bronchitis, pharyngitis
Sinusitis
Sepsis
Whooping cough, to prevent and treat secondary infections
Ampicillin was formerly also used to treat gonorrhea, but too many strains are now resistant to penicillins.
Bacteria
Ampicillin is used to treat infections by many gram-positive and gram-negative bacteria. It was the first "broad spectrum" penicillin with activity against gram-positive bacteria, including Streptococcus pneumoniae, Streptococcus pyogenes, some isolates of Staphylococcus aureus (but not penicillin-resistant or methicillin-resistant strains), Trueperella, and some Enterococcus. It is one of the few antibiotics that works against multidrug resistant Enterococcus faecalis and E. faecium. Activity against gram-negative bacteria includes Neisseria meningitidis, some Haemophilus influenzae, and some of the Enterobacteriaceae (though most Enterobacteriaceae and Pseudomonas are resistant). Its spectrum of activity is enhanced by co-administration of sulbactam, a drug that inhibits beta lactamase, an enzyme produced by bacteria to inactivate ampicillin and related antibiotics. It is sometimes used in combination with other antibiotics that have different mechanisms of action, like vancomycin, linezolid, daptomycin, and tigecycline.
Available forms
Ampicillin can be administered by mouth, an intramuscular injection (shot) or by intravenous infusion. The oral form, available as capsules or oral suspensions, is not given as an initial treatment for severe infections, but rather as a follow-up to an IM or IV injection. For IV and IM injections, ampicillin is kept as a powder that must be reconstituted.
IV injections must be given slowly, as rapid IV injections can lead to convulsive seizures.
Specific populations
Ampicillin is one of the most used drugs in pregnancy, and has been found to be generally harmless both by the Food and Drug Administration in the U.S. (which classified it as category B) and the Therapeutic Goods Administration in Australia (which classified it as category A). It is the drug of choice for treating Listeria monocytogenes in pregnant women, either alone or combined with an aminoglycoside. Pregnancy increases the clearance of ampicillin by up to 50%, and a higher dose is thus needed to reach therapeutic levels.
Ampicillin crosses the placenta and remains in the amniotic fluid at 50–100% of the concentration in maternal plasma; this can lead to high concentrations of ampicillin in the newborn.
While lactating mothers secrete some ampicillin into their breast milk, the amount is minimal.
In newborns, ampicillin has a longer half-life and lower plasma protein binding. The clearance by the kidneys is lower, as kidney function has not fully developed.
Contraindications
Ampicillin is contraindicated in those with a hypersensitivity to penicillins, as they can cause fatal anaphylactic reactions. Hypersensitivity reactions can include frequent skin rashes and hives, exfoliative dermatitis, erythema multiforme, and a temporary decrease in both red and white blood cells.
Ampicillin is not recommended in people with concurrent mononucleosis, as over 40% of patients develop a skin rash.
Side effects
Ampicillin is comparatively less toxic than other antibiotics, and side effects are more likely in those who are sensitive to penicillins and those with a history of asthma or allergies. In very rare cases, it causes severe side effects such as angioedema, anaphylaxis, and C. difficile infection (which can range from mild diarrhea to serious pseudomembranous colitis). Some develop black "furry" tongue. Serious adverse effects also include seizures and serum sickness. The most common side effects, experienced by about 10% of users, are diarrhea and rash. Less common side effects can be nausea, vomiting, itching, and blood dyscrasias. The gastrointestinal effects, such as hairy tongue, nausea, vomiting, diarrhea, and colitis, are more common with the oral form of penicillin. Other conditions may develop up to several weeks after treatment.
Overdose
Ampicillin overdose can cause behavioral changes, confusion, blackouts, and convulsions, as well as neuromuscular hypersensitivity, electrolyte imbalance, and kidney failure.
Interactions
Ampicillin reacts with probenecid and methotrexate to decrease renal excretion. Large doses of ampicillin can increase the risk of bleeding with concurrent use of warfarin and other oral anticoagulants, possibly by inhibiting platelet aggregation. Ampicillin has been said to make oral contraceptives less effective, but this has been disputed. It can be made less effective by other antibiotics, such as chloramphenicol, erythromycin, cephalosporins, and tetracyclines. For example, tetracyclines inhibit protein synthesis in bacteria, reducing the target against which ampicillin acts. If given at the same time as aminoglycosides, it can bind to them and inactivate them. When administered separately, aminoglycosides and ampicillin can instead potentiate each other.
Ampicillin causes skin rashes more often when given with allopurinol.
Both the live cholera vaccine and live typhoid vaccine can be made ineffective if given with ampicillin. Ampicillin is normally used to treat cholera and typhoid fever, lowering the immunological response that the body has to mount.
Pharmacology
Mechanism of action
Ampicillin is in the penicillin group of beta-lactam antibiotics and is part of the aminopenicillin family. It is roughly equivalent to amoxicillin in terms of activity. Ampicillin is able to penetrate gram-positive and some gram-negative bacteria. It differs from penicillin G, or benzylpenicillin, only by the presence of an amino group. This amino group, present on both ampicillin and amoxicillin, helps these antibiotics pass through the pores of the outer membrane of gram-negative bacteria, such as E. coli, Proteus mirabilis, Salmonella enterica, and Shigella.
Ampicillin acts as an irreversible inhibitor of the enzyme transpeptidase, which is needed by bacteria to make the cell wall. It inhibits the third and final stage of bacterial cell wall synthesis in binary fission, which ultimately leads to cell lysis; therefore, ampicillin is usually bacteriolytic.
Pharmacokinetics
Ampicillin is well-absorbed from the GI tract (though food reduces its absorption), and reaches peak concentrations in one to two hours. The bioavailability is around 62% for parenteral routes. Unlike other penicillins, which usually bind 60–90% to plasma proteins, ampicillin binds to only 15–20%.
Ampicillin is distributed through most tissues, though it is concentrated in the liver and kidneys. It can also be found in the cerebrospinal fluid when the meninges become inflamed (such as, for example, meningitis). Some ampicillin is metabolized by hydrolyzing the beta-lactam ring to penicilloic acid, though most of it is excreted unchanged. In the kidneys, it is filtered out mostly by tubular secretion; some also undergoes glomerular filtration, and the rest is excreted in the feces and bile.
Hetacillin and pivampicillin are ampicillin esters that have been developed to increase bioavailability.
History
Ampicillin has been used extensively to treat bacterial infections since 1961. Until the introduction of ampicillin by the British company Beecham, penicillin therapies had only been effective against gram-positive organisms such as staphylococci and streptococci. Ampicillin (originally branded as "Penbritin") also demonstrated activity against gram-negative organisms such as H. influenzae, coliforms, and Proteus spp.
Cost
Ampicillin is relatively inexpensive. In the United States, it is available as a generic medication.
Veterinary use
In veterinary medicine, ampicillin is used in cats, dogs, and farm animals to treat:
Anal gland infections
Cutaneous infections, such as abscesses, cellulitis, and pustular dermatitis
E. coli and Salmonella infections in cattle, sheep, and goats (oral form). Ampicillin use for this purpose has declined as bacterial resistance has increased.
Mastitis in sows
Mixed aerobic–anaerobic infections, such as from cat bites
Multidrug-resistant Enterococcus faecalis and E. faecium
Prophylactic use in poultry against Salmonella and sepsis from E. coli or Staphylococcus aureus
Respiratory tract infections, including tonsilitis, bovine respiratory disease, shipping fever, bronchopneumonia, and calf and bovine pneumonia
Urinary tract infections in dogs
Horses are generally not treated with oral ampicillin, as they have low bioavailability of beta-lactams.
The half-life in animals is around the same as that in humans (just over an hour). Oral absorption is less than 50% in cats and dogs, and less than 4% in horses.
See also
Amoxycillin (p-hydroxy metabolite of ampicillin)
Azlocillin and pirbenicillin (urea and amide made from ampicillin)
Pivampicillin (special pro-drug of ampicillin)
References
External links
Enantiopure drugs
Penicillins
Phenyl compounds
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
|
https://en.wikipedia.org/wiki/Antigen
|
In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response.
Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria.
Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction.
Antigens can originate either from within the body ("self-proteins" or "self-antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self-antigens and damage the body's own cells are called autoimmune diseases.
Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example.
Etymology
Paul Ehrlich coined the term antibody in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre) named the hypothetical substances halfway between bacterial constituents and antibodies "antigenic or immunogenic substances". He originally believed those substances to be precursors of antibodies, just as a zymogen is a precursor of an enzyme. But, by 1903, he understood that an antigen induces the production of immune bodies (antibodies) and wrote that the word antigen is a contraction of antisomatogen. The Oxford English Dictionary indicates that the logical construction should be "anti(body)-gen".
The term originally referred to a substance that acts as an antibody generator.
Terminology
Epitope – the distinct surface features of an antigen, its antigenic determinant. Antigenic molecules, normally "large" biological polymers, usually present surface features that can act as points of interaction for specific antibodies. Any such feature constitutes an epitope. Most antigens have the potential to be bound by multiple antibodies, each of which is specific to one of the antigen's epitopes. Using the "lock and key" metaphor, the antigen can be seen as a string of keys (epitopes), each of which matches a different lock (antibody). Different antibody idiotypes each have distinctly formed complementarity-determining regions.
Allergen – A substance capable of causing an allergic reaction. The (detrimental) reaction may result after exposure via ingestion, inhalation, injection, or contact with skin.
Superantigen – A class of antigens that cause non-specific activation of T-cells, resulting in polyclonal T-cell activation and massive cytokine release.
Tolerogen – A substance that invokes a specific immune non-responsiveness due to its molecular form. If its molecular form is changed, a tolerogen can become an immunogen.
Immunoglobulin-binding protein – Proteins such as protein A, protein G, and protein L that are capable of binding to antibodies at positions outside of the antigen-binding site. While antigens are the "target" of antibodies, immunoglobulin-binding proteins "attack" antibodies.
T-dependent antigen – Antigens that require the assistance of T cells to induce the formation of specific antibodies.
T-independent antigen – Antigens that stimulate B cells directly.
Immunodominant antigens – Antigens that dominate (over all others from a pathogen) in their ability to produce an immune response. T cell responses typically are directed against a relatively few immunodominant epitopes, although in some cases (e.g., infection with the malaria pathogen Plasmodium spp.) the response is dispersed over a relatively large number of parasite antigens.
Antigen-presenting cells present antigens in the form of peptides on histocompatibility molecules. The T cells selectively recognize the antigens; depending on the antigen and the type of the histocompatibility molecule, different types of T cells will be activated. For T-cell receptor (TCR) recognition, the peptide must be processed into small fragments inside the cell and presented by a major histocompatibility complex (MHC). The antigen cannot elicit the immune response without the help of an immunologic adjuvant. Similarly, the adjuvant component of vaccines plays an essential role in the activation of the innate immune system.
An immunogen is an antigen substance (or adduct) that is able to trigger a humoral (antibody-mediated) or cell-mediated immune response. It first initiates an innate immune response, which then causes the activation of the adaptive immune response. An antigen binds the highly variable immunoreceptor products (B-cell receptor or T-cell receptor) once these have been generated. Immunogens are those antigens, termed immunogenic, capable of inducing an immune response.
At the molecular level, an antigen can be characterized by its ability to bind to an antibody's paratopes. Different antibodies have the potential to discriminate among specific epitopes present on the antigen surface. A hapten is a small molecule that can only induce an immune response when attached to a larger carrier molecule, such as a protein. Antigens can be proteins, polysaccharides, lipids, nucleic acids or other biomolecules. This includes parts (coats, capsules, cell walls, flagella, fimbriae, and toxins) of bacteria, viruses, and other microorganisms. Non-microbial non-self antigens can include pollen, egg white, and proteins from transplanted tissues and organs or on the surface of transfused blood cells.
Sources
Antigens can be classified according to their source.
Exogenous antigens
Exogenous antigens are antigens that have entered the body from the outside, for example, by inhalation, ingestion or injection. The immune system's response to exogenous antigens is often subclinical. By endocytosis or phagocytosis, exogenous antigens are taken into the antigen-presenting cells (APCs) and processed into fragments. APCs then present the fragments to T helper cells (CD4+) by the use of class II histocompatibility molecules on their surface. Some T cells are specific for the peptide:MHC complex. They become activated and start to secrete cytokines, substances that activate cytotoxic T lymphocytes (CTL), antibody-secreting B cells, macrophages, and other cells.
Some antigens start out as exogenous and later become endogenous (for example, intracellular viruses). Intracellular antigens can be returned to circulation upon the destruction of the infected cell.
Endogenous antigens
Endogenous antigens are generated within normal cells as a result of normal cell metabolism, or because of viral or intracellular bacterial infection. The fragments are then presented on the cell surface in the complex with MHC class I molecules. If activated cytotoxic CD8+ T cells recognize them, the T cells secrete various toxins that cause the lysis or apoptosis of the infected cell. In order to keep the cytotoxic cells from killing cells just for presenting self-proteins, the cytotoxic cells (self-reactive T cells) are deleted as a result of tolerance (negative selection). Endogenous antigens include xenogenic (heterologous), autologous and idiotypic or allogenic (homologous) antigens. Sometimes antigens are part of the host itself in an autoimmune disease.
Autoantigens
An autoantigen is usually a self-protein or protein complex (and sometimes DNA or RNA) that is recognized by the immune system of patients with a specific autoimmune disease. Under normal conditions, these self-proteins should not be the target of the immune system, but in autoimmune diseases, their associated T cells are not deleted and instead attack.
Neoantigens
Neoantigens are those that are entirely absent from the normal human genome. As compared with nonmutated self-proteins, neoantigens are of relevance to tumor control, as the quality of the T cell pool that is available for these antigens is not affected by central T cell tolerance. Technology to systematically analyze T cell reactivity against neoantigens became available only recently. Neoantigens can be directly detected and quantified.
Viral antigens
For virus-associated tumors, such as cervical cancer and a subset of head and neck cancers, epitopes derived from viral open reading frames contribute to the pool of neoantigens.
Tumor antigens
Tumor antigens are those antigens that are presented by MHC class I or MHC class II molecules on the surface of tumor cells. Antigens found only on such cells are called tumor-specific antigens (TSAs) and generally result from a tumor-specific mutation. More common are antigens that are presented by tumor cells and normal cells, called tumor-associated antigens (TAAs). Cytotoxic T lymphocytes that recognize these antigens may be able to destroy tumor cells.
Tumor antigens can appear on the surface of the tumor in the form of, for example, a mutated receptor, in which case they are recognized by B cells.
For human tumors without a viral etiology, novel peptides (neo-epitopes) are created by tumor-specific DNA alterations.
Process
A large fraction of human tumor mutations are effectively patient-specific. Therefore, neoantigens may also be based on individual tumor genomes. Deep-sequencing technologies can identify mutations within the protein-coding part of the genome (the exome) and predict potential neoantigens. In mouse models, potential MHC-binding peptides were predicted for all novel protein sequences. The resulting set of potential neoantigens was used to assess T cell reactivity. Exome-based analyses have been exploited in a clinical setting to assess reactivity in patients treated by either tumor-infiltrating lymphocyte (TIL) cell therapy or checkpoint blockade. Neoantigen identification was successful for multiple experimental model systems and human malignancies.
The false-negative rate of cancer exome sequencing is low—i.e., the majority of neoantigens occur within exonic sequence with sufficient coverage. However, the vast majority of mutations within expressed genes do not produce neoantigens that are recognized by autologous T cells.
As of 2015 mass spectrometry resolution is insufficient to exclude many false positives from the pool of peptides that may be presented by MHC molecules. Instead, algorithms are used to identify the most likely candidates. These algorithms consider factors such as the likelihood of proteasomal processing, transport into the endoplasmic reticulum, affinity for the relevant MHC class I alleles and gene expression or protein translation levels.
The majority of human neoantigens identified in unbiased screens display a high predicted MHC binding affinity. Minor histocompatibility antigens, a conceptually similar antigen class, are also correctly identified by MHC binding algorithms. Another potential filter examines whether the mutation is expected to improve MHC binding. The nature of the central TCR-exposed residues of MHC-bound peptides is associated with peptide immunogenicity.
Nativity
A native antigen is an antigen that is not yet processed by an APC to smaller parts. T cells cannot bind native antigens, but require that they be processed by APCs, whereas B cells can be activated by native ones.
Antigenic specificity
Antigenic specificity is the ability of the host cells to recognize an antigen specifically as a unique molecular entity and distinguish it from another with exquisite precision. Antigen specificity is due primarily to the side-chain conformations of the antigen. It is measurable and need not be linear or of a rate-limited step or equation. Both T cells and B cells are cellular components of adaptive immunity.
See also
References
Immune system
Biomolecules
|
https://en.wikipedia.org/wiki/Apus
|
Apus is a small constellation in the southern sky. It represents a bird-of-paradise, and its name means "without feet" in Greek because the bird-of-paradise was once wrongly believed to lack feet. First depicted on a celestial globe by Petrus Plancius in 1598, it was charted on a star atlas by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756.
The five brightest stars are all reddish in hue. Shading the others at apparent magnitude 3.8 is Alpha Apodis, an orange giant that has around 48 times the diameter and 928 times the luminosity of the Sun. Marginally fainter is Gamma Apodis, another ageing giant star. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible with the naked eye. Two star systems have been found to have planets.
History
Apus was one of twelve constellations published by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. De Houtman included it in his southern star catalogue in 1603 under the Dutch name De Paradijs Voghel, "The Bird of Paradise", and Plancius called the constellation Paradysvogel Apis Indica; the first word is Dutch for "bird of paradise". Apis (Latin for "bee") is assumed to have been a typographical error for avis ("bird").
After its introduction on Plancius's globe, the constellation's first known appearance in a celestial atlas was in German cartographer Johann Bayer's Uranometria of 1603. Bayer called it Apis Indica while fellow astronomers Johannes Kepler and his son-in-law Jakob Bartsch called it Apus or Avis Indica. The name Apus is derived from the Greek apous, meaning "without feet". This referred to the Western misconception that the bird-of-paradise had no feet, which arose because the only specimens available in the West had their feet and wings removed. Such specimens began to arrive in Europe in 1522, when the survivors of Ferdinand Magellan's expedition brought them home. The constellation later lost some of its tail when Nicolas-Louis de Lacaille used those stars to establish Octans in the 1750s.
Characteristics
Covering 206.3 square degrees and hence 0.5002% of the sky, Apus ranks 67th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 7°N. It is bordered by Ara, Triangulum Australe and Circinus to the north, Musca and Chamaeleon to the west, Octans to the south, and Pavo to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Aps". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of six segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −67.48° and −83.12°.
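The quoted fraction of the sky follows from the area of the whole celestial sphere, 4π steradians or 129600/π ≈ 41,253 square degrees; a quick check (area figure taken from the text, small difference from the quoted 0.5002% is rounding of the quoted area):

```python
import math

# Whole celestial sphere: 4*pi steradians = 129600/pi square degrees.
TOTAL_SKY_SQ_DEG = 129600 / math.pi

apus_area = 206.3  # square degrees, from the text
pct = 100 * apus_area / TOTAL_SKY_SQ_DEG
print(round(pct, 4))  # ≈ 0.5001, consistent with the quoted ~0.5002%
```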
Features
Stars
Lacaille gave twelve stars Bayer designations, labelling them Alpha through to Kappa, including two stars next to each other as Delta and another two stars near each other as Kappa. Within the constellation's borders, there are 39 stars brighter than or equal to apparent magnitude 6.5. Beta, Gamma and Delta Apodis form a narrow triangle, with Alpha Apodis lying to the east. The five brightest stars are all red-tinged, which is unusual among constellations.
Alpha Apodis is an orange giant of spectral type K3III located 430 ± 20 light-years away from Earth, with an apparent magnitude of 3.8. It spent much of its life as a blue-white (B-type) main sequence star before expanding, cooling and brightening as it used up its core hydrogen. It has swollen to 48 times the Sun's diameter, and shines with a luminosity approximately 928 times that of the Sun, with a surface temperature of 4312 K. Beta Apodis is an orange giant 149 ± 2 light-years away, with a magnitude of 4.2. It is around 1.84 times as massive as the Sun, with a surface temperature of 4677 K. Gamma Apodis is a yellow giant of spectral type G8III located 150 ± 4 light-years away, with a magnitude of 3.87. It is approximately 63 times as luminous as the Sun, with a surface temperature of 5279 K. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible through binoculars. Delta1 is a red giant star of spectral type M4III located 630 ± 30 light-years away. It is a semiregular variable that varies from magnitude +4.66 to +4.87, with pulsations of multiple periods of 68.0, 94.9 and 101.7 days. Delta2 is an orange giant star of spectral type K3III, located 550 ± 10 light-years away, with a magnitude of 5.3. The separate components can be resolved with the naked eye.
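The size of a giant like Alpha Apodis can be cross-checked from its luminosity and temperature via the Stefan–Boltzmann law, L = 4πR²σT⁴. A rough sketch in solar units, using only figures quoted above; since published temperature and luminosity estimates vary, the result only approximately matches the quoted 48 solar diameters:

```python
import math

T_SUN = 5772.0  # K, solar effective temperature

def radius_in_suns(lum_suns, t_eff):
    """Stefan-Boltzmann in solar units: R/Rsun = sqrt(L/Lsun) * (Tsun/T)^2."""
    return math.sqrt(lum_suns) * (T_SUN / t_eff) ** 2

# Alpha Apodis: L ≈ 928 Lsun, T ≈ 4312 K (values from the text).
print(round(radius_in_suns(928, 4312)))  # ≈ 55 solar radii
```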
The fifth-brightest star is Zeta Apodis at magnitude 4.8, a star that has swollen and cooled to become an orange giant of spectral type K1III, with a surface temperature of 4649 K and a luminosity 133 times that of the Sun. It is 300 ± 4 light-years distant. Near Zeta is Iota Apodis, a binary star system 1,040 ± 60 light-years distant, that is composed of two blue-white main sequence stars that orbit each other every 59.32 years. Of spectral types B9V and B9.5 V, they are both over three times as massive as the Sun.
Eta Apodis is a white main sequence star located 140.8 ± 0.9 light-years distant. Of apparent magnitude 4.89, it is 1.77 times as massive, 15.5 times as luminous as the Sun and has 2.13 times its radius. Aged 250 ± 200 million years old, this star is emitting an excess of 24 μm infrared radiation, which may be caused by a debris disk of dust orbiting at a distance of more than 31 astronomical units from it.
Theta Apodis is a cool red giant of spectral type M7 III located 350 ± 30 light-years distant. It shines with a luminosity approximately 3879 times that of the Sun and has a surface temperature of 3151 K. A semiregular variable, it varies by 0.56 magnitudes with a period of 119 days—or approximately 4 months. It is losing mass through its stellar wind. Dusty material ejected from this star is interacting with the surrounding interstellar medium, forming a bow shock as the star moves through the galaxy. NO Apodis is a red giant of spectral type M3III that varies between magnitudes 5.71 and 5.95. Located 780 ± 20 light-years distant, it shines with a luminosity estimated at 2059 times that of the Sun and has a surface temperature of 3568 K. S Apodis is a rare R Coronae Borealis variable, an extremely hydrogen-deficient supergiant thought to have arisen as the result of the merger of two white dwarfs; fewer than 100 have been discovered as of 2012. It has a baseline magnitude of 9.7. R Apodis is a star that was given a variable star designation, yet has turned out not to be variable. Of magnitude 5.3, it is another orange giant.
Two star systems have had exoplanets discovered by doppler spectroscopy, and the substellar companion of a third star system—the sunlike star HD 131664—has since been found to be a brown dwarf with a calculated mass of 23 times that of Jupiter (minimum of 18 and maximum of 49 Jovian masses). HD 134606 is a yellow sunlike star of spectral type G6IV that has begun expanding and cooling off the main sequence. Three planets orbit it with periods of 12, 59.5 and 459 days, successively larger as they are further away from the star. HD 137388 is another star—of spectral type K2IV—that is cooler than the Sun and has begun cooling off the main sequence. Around 47% as luminous and 88% as massive as the Sun, with 85% of its diameter, it is thought to be around 7.4 ± 3.9 billion years old. It has a planet that is 79 times as massive as the Earth and orbits its sun every 330 days at an average distance of 0.89 astronomical units (AU).
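The period and orbital distance quoted for the HD 137388 planet are consistent under Kepler's third law, a³ = M·P² in units of AU, years and solar masses; a small check using only figures from the text:

```python
def semi_major_axis_au(period_days, stellar_mass_suns):
    """Kepler's third law in solar units: a^3 = M * P^2 (AU, yr, Msun)."""
    p_years = period_days / 365.25
    return (stellar_mass_suns * p_years ** 2) ** (1 / 3)

# HD 137388: P = 330 days, M* = 0.88 Msun (values from the text).
print(round(semi_major_axis_au(330, 0.88), 2))  # 0.9, matching the quoted 0.89 AU
```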
Deep-sky objects
The Milky Way covers much of the constellation's area. Of the deep-sky objects in Apus, there are two prominent globular clusters—NGC 6101 and IC 4499—and a large faint nebula that covers several degrees east of Beta and Gamma Apodis. NGC 6101 is a globular cluster of apparent magnitude 9.2 located around 50,000 light-years distant from Earth, which is around 160 light-years across. Around 13 billion years old, it contains a high concentration of massive bright stars known as blue stragglers, thought to be the result of two stars merging. IC 4499 is a loose globular cluster in the medium-far galactic halo; its apparent magnitude is 10.6.
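The quoted physical size and distance of NGC 6101 imply its apparent size on the sky via the small-angle approximation (angular size ≈ physical size / distance); a quick estimate using the figures from the text:

```python
import math

# NGC 6101: about 160 ly across at about 50,000 ly (values from the text).
size_ly, dist_ly = 160, 50000
theta_arcmin = math.degrees(size_ly / dist_ly) * 60  # small-angle approximation
print(round(theta_arcmin, 1))  # ≈ 11.0 arcminutes on the sky
```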
The galaxies in the constellation are faint. IC 4633 is a very faint spiral galaxy surrounded by a vast amount of Milky Way line-of-sight integrated flux nebulae—large faint clouds thought to be lit by large numbers of stars.
See also
IAU-recognized constellations
Notes
References
External links
The Deep Photographic Guide to the Constellations: Apus
The clickable Apus
Southern constellations
Constellations listed by Petrus Plancius
Dutch celestial cartography in the Age of Discovery
Astronomy in the Dutch Republic
1590s in the Dutch Republic
|
https://en.wikipedia.org/wiki/Avionics
|
Avionics (a blend of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.
History
The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics".
Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy, so they required two-seat aircraft with a second crewman to tap on a telegraph key to spell out messages in Morse code. During World War I, the development of the triode vacuum tube made AM voice two-way radio sets possible by 1917; these sets were simple enough that the pilot of a single-seat aircraft could use one while flying.
Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its U.S. ally, particularly the magnetron vacuum tube, in the famous Tizard Mission, significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics.
The civilian market has also seen a growth in the cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of safely controlling aircraft in these highly restrictive airspaces have been invented.
Modern avionics
Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. The Joint Planning and Development Office put forth a roadmap for avionics in six areas:
Published Routes and Procedures – Improved navigation and routing
Negotiated Trajectories – Adding data communications to create preferred routes dynamically
Delegated Separation – Enhanced situational awareness in the air and on the ground
Low Visibility/Ceiling Approach/Departure – Allowing operations with weather constraints with less ground infrastructure
Surface Operations – To increase safety in approach and departure
ATM Efficiencies – Improving the air traffic management (ATM) process
Market
The Aircraft Electronics Association reported $1.73 billion in avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America, and forward-fit represented 42.3% while 57.7% were retrofits, as the U.S. deadline of January 1, 2020 for mandatory ADS-B Out approached.
Aircraft avionics
The cockpit of an aircraft is a typical location for avionic equipment, including control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28‑volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 volts AC, 400 Hz. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo S.p.A.), Shadin Avionics, and Avidyne Corporation.
International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC.
Communications
Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms.
The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication.
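The band limits and channel spacings above determine the number of usable channels; a small sketch (the "8.33 kHz" raster is in fact 25/3 kHz, so it triples the channel count):

```python
def channel_count(low_mhz, high_mhz, spacing_khz):
    # Inclusive count of channel centres across the band at a given raster.
    return round((high_mhz - low_mhz) * 1000 / spacing_khz) + 1

n25 = channel_count(118.000, 136.975, 25)
print(n25)      # 760 channels on the 25 kHz raster
# Each 25 kHz channel splits into three 8.33 kHz channels in Europe:
print(n25 * 3)  # 2280
```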
Navigation
Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation systems (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Modern systems such as GPS calculate the position automatically and display it to the flight crew on moving-map displays; older ground-based navigation systems such as VOR or LORAN require a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location.
Monitoring
The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. In the 1970s, the average aircraft had more than 100 cockpit instruments and controls.
Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed.
Aircraft flight-control system
Aircraft have means of automatically controlling flight. The autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steadily enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload during landing and takeoff.
The first simple commercial auto-pilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety critical systems, the software is very strictly tested.
Fuel Systems
Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers, and level sensors, the FQIS computer calculates the mass of fuel remaining on board.
Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner but, by controlling pumps and valves, also manages fuel transfers around the various tanks:
Refuelling control to upload to a certain total mass of fuel and distribute it automatically.
Transfers during flight to the tanks that feed the engines, e.g., from fuselage to wing tanks
Centre of gravity control transfers from the tail (trim) tanks forward to the wings as fuel is expended
Maintaining fuel in the wing tips (to help stop the wings bending due to lift in flight) & transferring to the main tanks after landing
Controlling fuel jettison during an emergency to reduce the aircraft weight.
Collision-avoidance systems
To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution.
To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. One of the major weaknesses of GPWS is the lack of "look-ahead" information, because it only provides altitude above terrain "look-down". In order to overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS).
Flight recorders
Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident.
Weather systems
Weather systems such as weather radar (typically Arinc 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) and lightning activity are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas.
Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation.
Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In-flight weather avionics are especially popular in Africa, India, and other regions where air travel is a growing market but ground-based support is not as well developed.
Aircraft management systems
There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement.
The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners.
Mission or tactical avionics
Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical means are required. As with aircraft management, the bigger sensor platforms (like the E‑3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers.
Police and EMS aircraft also carry sophisticated tactical sensors.
Military communications
While aircraft communications provide the backbone for safe flight, tactical systems are designed to withstand the rigors of the battlefield. UHF, VHF Tactical (30–88 MHz) and SatCom systems, combined with ECCM methods and cryptography, secure the communications. Data links such as Link 11, 16, 22 and BOWMAN, JTRS and even TETRA provide the means of transmitting data (such as images, targeting information, etc.).
Radar
Airborne radar was one of the first tactical sensors. Because altitude extends a radar's range, airborne radar has been a significant focus of technology development. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (ARINC 708) and ground tracking/proximity radar.
The military uses radar in fast jets to help pilots fly at low levels. While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft.
Sonar
Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines.
Electro-optics
Electro-optic systems include devices such as the head-up display (HUD), forward-looking infrared (FLIR), infrared search and track, and other passive infrared sensors. These are all used to provide imagery and information to the flight crew, for everything from search and rescue to navigational aids and target acquisition.
ESM/DAS
Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it.
Aircraft networks
The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include:
Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft
Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft
ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft
ARINC 664: See ADN above
ARINC 629: Commercial Aircraft (Boeing 777)
ARINC 708: Weather Radar for Commercial Aircraft
ARINC 717: Flight Data Recorder for Commercial Aircraft
ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350)
Commercial Standard Digital Bus
IEEE 1394b: Military Aircraft
MIL-STD-1553: Military Aircraft
MIL-STD-1760: Military Aircraft
TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace
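As an illustration of what such a databus carries, the sketch below packs the standard ARINC 429 field layout (8-bit label, 2-bit SDI, 19-bit data field, 2-bit SSM, and an odd-parity bit) into a 32-bit word. The helper name and example values are assumptions for illustration, not part of any avionics library.

```python
# Illustrative sketch of packing a 32-bit ARINC 429 word
# (label, SDI, data, SSM, odd parity). Field widths follow the
# standard layout; the label value and data here are made up.

def arinc429_word(label_octal, sdi, data, ssm):
    """Pack fields into a 32-bit ARINC 429 word with odd parity."""
    assert 0 <= sdi < 4 and 0 <= ssm < 4 and 0 <= data < 2**19
    label = int(str(label_octal), 8)      # labels are conventionally quoted in octal
    word = label                          # bits 1-8:  label
    word |= sdi << 8                      # bits 9-10: source/destination identifier
    word |= data << 10                    # bits 11-29: data field
    word |= ssm << 29                     # bits 30-31: sign/status matrix
    if bin(word).count("1") % 2 == 0:     # bit 32: set so the word has odd parity
        word |= 1 << 31
    return word

w = arinc429_word(203, 0, 0x1234, 3)      # e.g. label 203 (octal), example data
assert bin(w).count("1") % 2 == 1         # whole word carries odd parity
```

The parity bit lets a receiving unit detect any single-bit transmission error, which is why it is part of the word format rather than left to higher layers.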
See also
Astrionics, similar, for spacecraft
Acronyms and abbreviations in avionics
Avionics software
Emergency locator beacon
Emergency position-indicating radiobeacon station
Integrated modular avionics
Notes
Further reading
Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006)
Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007)
Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005)
Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D. and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005).
External links
Avionics in Commercial Aircraft
Aircraft Electronics Association (AEA)
Pilot's Guide to Avionics
The Avionic Systems Standardisation Committee
Space Shuttle Avionics
Aviation Today Avionics magazine
RAES Avionics homepage
Aircraft instruments
Spacecraft components
Electronic engineering
|
https://en.wikipedia.org/wiki/Aeronautics
|
Aeronautics is the science or art involved with the study, design, and manufacturing of air flight–capable machines, and the techniques of operating aircraft and rockets within the atmosphere. The British Royal Aeronautical Society identifies the aspects of "aeronautical Art, Science and Engineering" and "The profession of Aeronautics (which expression includes Astronautics)."
While the term originally referred solely to operating the aircraft, it has since been expanded to include technology, business, and other aspects related to aircraft.
The term "aviation" is sometimes used interchangeably with aeronautics, although "aeronautics" includes lighter-than-air craft such as airships, and includes ballistic vehicles while "aviation" technically does not.
A significant part of aeronautical science is a branch of dynamics called aerodynamics, which deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft.
History
Early ideas
Attempts to fly without any real aeronautical understanding have been made from the earliest times, typically by constructing wings and jumping from a tower with crippling or lethal results.
Wiser investigators sought to gain some rational understanding through the study of bird flight. Medieval Islamic Golden Age scientists such as Abbas ibn Firnas also made such studies. The founders of modern aeronautics, Leonardo da Vinci in the Renaissance and Cayley in 1799, both began their investigations with studies of bird flight.
Man-carrying kites are believed to have been used extensively in ancient China. In 1282 the Italian explorer Marco Polo described the Chinese techniques then current. The Chinese also constructed small hot air balloons, or lanterns, and rotary-wing toys.
An early European to provide any scientific discussion of flight was Roger Bacon, who described principles of operation for the lighter-than-air balloon and the flapping-wing ornithopter, which he envisaged would be constructed in the future. The lifting medium for his balloon would be an "aether" whose composition he did not know.
In the late fifteenth century, Leonardo da Vinci followed up his study of birds with designs for some of the earliest flying machines, including the flapping-wing ornithopter and the rotating-wing helicopter. Although his designs were rational, they were not based on particularly good science. Many of his designs, such as a four-person screw-type helicopter, have severe flaws. He did at least understand that "An object offers as much resistance to the air as the air does to the object." (Newton would not publish the Third law of motion until 1687.) His analysis led to the realisation that manpower alone was not sufficient for sustained flight, and his later designs included a mechanical power source such as a spring. Da Vinci's work was lost after his death and did not reappear until it had been overtaken by the work of George Cayley.
Balloon flight
The modern era of lighter-than-air flight began early in the 17th century with Galileo's experiments in which he showed that air has weight. Around 1650 Cyrano de Bergerac wrote some fantasy novels in which he described the principle of ascent using a substance (dew) he supposed to be lighter than air, and descending by releasing a controlled amount of the substance. Francesco Lana de Terzi measured the pressure of air at sea level and in 1670 proposed the first scientifically credible lifting medium in the form of hollow metal spheres from which all the air had been pumped out. These would be lighter than the displaced air and able to lift an airship. His proposed methods of controlling height are still in use today: carrying ballast which may be dropped overboard to gain height, and venting the lifting containers to lose height. In practice de Terzi's spheres would have collapsed under air pressure, and further developments had to wait for more practicable lifting gases.
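A back-of-envelope calculation shows why de Terzi's evacuated spheres were impracticable: the shell must weigh less than the air it displaces. Assuming a copper shell and round densities (illustrative values, not from the source), the maximum shell thickness comes out to tens of micrometres, far too thin to resist atmospheric pressure.

```python
import math

# Net lift of an evacuated sphere requires shell mass < displaced air mass.
# Shell mass ~ 4*pi*r^2 * t * rho_shell; displaced air = (4/3)*pi*r^3 * rho_air.
# Equating the two gives the maximum shell thickness t = rho_air * r / (3 * rho_shell).

rho_air = 1.2        # kg/m^3, air at sea level (rounded)
rho_copper = 8900.0  # kg/m^3, a plausible shell material (rounded)

r = 1.0  # sphere radius in metres
displaced_air_kg = rho_air * (4 / 3) * math.pi * r**3     # lift available
max_thickness = rho_air * r / (3 * rho_copper)            # metres

print(f"lift available: {displaced_air_kg:.1f} kg")
print(f"max shell thickness for positive lift: {max_thickness * 1e6:.0f} micrometres")
```

A metal foil tens of micrometres thick buckles immediately under the roughly 100 kPa of external air pressure, which is exactly the collapse the text describes.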
From the mid-18th century the Montgolfier brothers in France began experimenting with balloons. Their balloons were made of paper, and early experiments using steam as the lifting gas were short-lived due to its effect on the paper as it condensed. Mistaking smoke for a kind of steam, they began filling their balloons with hot smoky air which they called "electric smoke" and, despite not fully understanding the principles at work, made some successful launches and in 1783 were invited to give a demonstration to the French Académie des Sciences.
Meanwhile, the discovery of hydrogen led Joseph Black to propose its use as a lifting gas, though practical demonstration awaited a gas-tight balloon material. On hearing of the Montgolfier brothers' invitation, the French Academy member Jacques Charles offered a similar demonstration of a hydrogen balloon. Charles and two craftsmen, the Robert brothers, developed a gas-tight material of rubberised silk for the envelope. The hydrogen gas was to be generated by chemical reaction during the filling process.
The Montgolfier designs had several shortcomings, not least the need for dry weather and a tendency for sparks from the fire to set light to the paper balloon. The manned design had a gallery around the base of the balloon rather than the hanging basket of the first, unmanned design, which brought the paper closer to the fire. On their free flight, De Rozier and d'Arlandes took buckets of water and sponges to douse these fires as they arose. On the other hand, the manned design of Charles was essentially modern. As a result of these exploits, the hot air balloon became known as the Montgolfière type and the gas balloon the Charlière.
Charles and the Robert brothers' next balloon, La Caroline, was a Charlière that followed Jean Baptiste Meusnier's proposals for an elongated dirigible balloon, and was notable for having an outer envelope with the gas contained in a second, inner ballonet. On 19 September 1784, it completed the first flight of over 100 km, between Paris and Beuvry, despite the man-powered propulsive devices proving useless.
In an attempt the next year to provide both endurance and controllability, de Rozier developed a balloon having both hot air and hydrogen gas bags, a design which was soon named after him as the Rozière. The principle was to use the hydrogen section for constant lift and to navigate vertically by heating and allowing to cool the hot air section, in order to catch the most favourable wind at whatever altitude it was blowing. The balloon envelope was made of goldbeater's skin. The first flight ended in disaster and the approach has seldom been used since.
Cayley and the foundation of modern aeronautics
Sir George Cayley (1773–1857) is widely acknowledged as the founder of modern aeronautics. He was first called the "father of the aeroplane" in 1846 and Henson called him the "father of aerial navigation." He was the first true scientific aerial investigator to publish his work, which included for the first time the underlying principles and forces of flight.
In 1809 he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air." He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight and distinguished stability and control in his designs.
He developed the modern conventional form of the fixed-wing aeroplane having a stabilising tail with both horizontal and vertical surfaces, flying gliders both unmanned and manned.
He introduced the use of the whirling arm test rig to investigate the aerodynamics of flight, using it to discover the benefits of the curved or cambered aerofoil over the flat wing he had used for his first glider. He also identified and described the importance of dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes.
Another significant invention was the tension-spoked wheel, which he devised in order to create a light, strong wheel for aircraft undercarriage.
The 19th century: Otto Lilienthal and the first human flights
During the 19th century Cayley's ideas were refined, proved and expanded on, culminating in the works of Otto Lilienthal.
Lilienthal was a German engineer and businessman who became known as the "flying man". He was the first person to make well-documented, repeated, successful flights with gliders, therefore making the idea of "heavier than air" a reality. Newspapers and magazines published photographs of Lilienthal gliding, favourably influencing public and scientific opinion about the possibility of flying machines becoming practical.
His work led to his developing the concept of the modern wing. His flight attempts in Berlin in 1891 are seen as the beginning of human flight, and the "Lilienthal Normalsegelapparat" is considered to be the first airplane in series production, making the Maschinenfabrik Otto Lilienthal in Berlin the first airplane production company in the world.
Otto Lilienthal is often referred to as either the "father of aviation" or "father of flight".
Other important investigators included Horatio Phillips.
Branches
Aeronautics may be divided into three main branches: aviation, aeronautical science and aeronautical engineering.
Aviation
Aviation is the art or practice of aeronautics. Historically, aviation meant only heavier-than-air flight, but nowadays it includes flying in balloons and airships.
Aeronautical engineering
Aeronautical engineering covers the design and construction of aircraft, including how they are powered, how they are used and how they are controlled for safe operation.
A major part of aeronautical engineering is aerodynamics, the science of how air moves and how bodies pass through it.
With the increasing activity in space flight, nowadays aeronautics and astronautics are often combined as aerospace engineering.
Aerodynamics
The science of aerodynamics deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft.
The study of aerodynamics falls broadly into three areas:
Incompressible flow occurs where the air simply moves to avoid objects, typically at speeds well below that of sound (Mach 1).
Compressible flow occurs where shock waves appear at points where the air becomes compressed, typically at speeds above Mach 1.
Transonic flow occurs in the intermediate speed range around Mach 1, where the airflow over an object may be locally subsonic at one point and locally supersonic at another.
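The three regimes above can be summarised in a tiny classifier. The transonic boundaries used here (roughly Mach 0.8–1.2) are a common rule of thumb, not a sharp physical cutoff.

```python
# Minimal classifier for the three flow regimes described above.
# The transonic band (roughly Mach 0.8-1.2) is an approximate convention.

def flow_regime(mach):
    if mach < 0.8:
        return "incompressible/subsonic"
    elif mach <= 1.2:
        return "transonic"
    return "supersonic (compressible, shock waves)"

print(flow_regime(0.3))   # incompressible/subsonic
print(flow_regime(1.0))   # transonic
print(flow_regime(2.0))   # supersonic (compressible, shock waves)
```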
Rocketry
A rocket or rocket vehicle is a missile, spacecraft, aircraft or other vehicle which obtains thrust from a rocket engine. In all rockets, the exhaust is formed entirely from propellants carried within the rocket before use. Rocket engines work by action and reaction: they push the rocket forward by throwing their exhaust backwards extremely fast.
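The action-reaction principle can be quantified: thrust equals exhaust mass flow times exhaust velocity, and the Tsiolkovsky rocket equation gives the achievable change in speed from the propellant fraction. The numbers below are illustrative, not taken from any particular engine.

```python
import math

# "Throwing exhaust backwards fast", made quantitative.
# Thrust F = mass flow rate * exhaust velocity;
# Tsiolkovsky: delta-v = v_e * ln(m_initial / m_final).

def thrust(mass_flow_kg_s, exhaust_velocity_m_s):
    return mass_flow_kg_s * exhaust_velocity_m_s  # newtons

def delta_v(exhaust_velocity_m_s, m_initial_kg, m_final_kg):
    return exhaust_velocity_m_s * math.log(m_initial_kg / m_final_kg)

print(thrust(250.0, 3000.0))          # 750 kN from a hypothetical engine
print(delta_v(3000.0, 500e3, 100e3))  # ~4.8 km/s from an 80% propellant fraction
```

The logarithm in the second formula is why rockets are "comparatively inefficient for low speed use" yet capable of extremely high speeds: each further increment of speed demands exponentially more propellant.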
Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology of the Space Age, including setting foot on the Moon.
Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency.
Chemical rockets are the most common type of rocket and they typically create their exhaust by the combustion of rocket propellant. Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
See also
References
Citations
Sources
External links
Aeronautics
Aviation Terminology
Jeppesen The AVIATION DICTIONARY for pilots and aviation technicians
DTIC ADA032206: Chinese-English Aviation and Space Dictionary
Courses
Research
Vehicle operation
|
https://en.wikipedia.org/wiki/Actinide
|
The actinide () or actinoid () series encompasses the 14 metallic chemical elements with atomic numbers from 89 to 102, actinium through nobelium. The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide.
The 1985 IUPAC Red Book recommends that actinoid be used rather than actinide, since the suffix -ide normally indicates a negative ion. However, owing to widespread current use, actinide is still allowed. Since actinoid literally means actinium-like (cf. humanoid or android), it has been argued for semantic reasons that actinium cannot logically be an actinoid, but IUPAC acknowledges its inclusion based on common usage.
All the actinides are f-block elements. Lawrencium is sometimes considered one as well, despite being a d-block element and a transition metal. The series mostly corresponds to the filling of the 5f electron shell, although in the ground state many have anomalous configurations involving the filling of the 6d shell due to interelectronic repulsion. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from americium onwards) behave similarly to the lanthanides, the elements thorium, protactinium, and uranium are much more similar to transition metals in their chemistry, with neptunium and plutonium occupying an intermediate position.
All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These are used in nuclear reactors and nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors.
Of the actinides, primordial thorium and uranium occur naturally in substantial quantities. The radioactive decay of uranium produces transient amounts of actinium and protactinium, and atoms of neptunium and plutonium are occasionally produced from transmutation reactions in uranium ores. The other actinides are purely synthetic elements. Nuclear weapons tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium.
In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods).
Discovery, isolation and synthesis
Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table; and transplutonium elements, which follow plutonium. Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. Most do not occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium.
The existence of transuranium elements was suggested in 1934 by Enrico Fermi, based on his experiments. However, even though four actinides were known by that time, it was not yet understood that they formed a family similar to lanthanides. The prevailing view that dominated early research into transuranics was that they were regular elements in the 7th period, with thorium, protactinium and uranium corresponding to 6th-period hafnium, tantalum and tungsten, respectively. Synthesis of transuranics gradually undermined this point of view. By 1944, an observation that curium failed to exhibit oxidation states above 4 (whereas its supposed 6th period homolog, platinum, can reach oxidation state of 6) prompted Glenn Seaborg to formulate an "actinide hypothesis". Studies of known actinides and discoveries of further transuranic elements provided more data in support of this position, but the phrase "actinide hypothesis" (the implication being that a "hypothesis" is something that has not been decisively proven) remained in active use by scientists through the late 1950s.
At present, there are two major methods of producing isotopes of transplutonium elements: (1) irradiation of the lighter elements with neutrons; (2) irradiation with accelerated charged particles. The first method is more important for applications, as only neutron irradiation using nuclear reactors allows the production of sizeable amounts of synthetic actinides; however, it is limited to relatively light elements. The advantage of the second method is that elements heavier than plutonium, as well as neutron-deficient isotopes, can be obtained, which are not formed during neutron irradiation.
In 1962–1966, there were attempts in the United States to produce transplutonium isotopes using a series of six underground nuclear explosions. Small samples of rock were extracted from the blast area immediately after the test to study the explosion products, but no isotopes with mass number greater than 257 could be detected, despite predictions that such isotopes would have relatively long half-lives of α-decay. This non-observation was attributed to spontaneous fission owing to the large speed of the products and to other decay channels, such as neutron emission and nuclear fission.
From actinium to uranium
Uranium and thorium were the first actinides discovered. Uranium was identified in 1789 by the German chemist Martin Heinrich Klaproth in pitchblende ore. He named it after the planet Uranus, which had been discovered eight years earlier. Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. He then reduced the obtained yellow powder with charcoal, and extracted a black substance that he mistook for metal. Sixty years later, the French scientist Eugène-Melchior Péligot identified it as uranium oxide. He also isolated the first sample of uranium metal by heating uranium tetrachloride with metallic potassium. The atomic mass of uranium was then calculated as 120, but Dmitri Mendeleev in 1872 corrected it to 240 using his periodicity laws. This value was confirmed experimentally in 1882 by K. Zimmerman.
Thorium oxide was discovered by Friedrich Wöhler in the mineral thorianite, which was found in Norway (1827). Jöns Jacob Berzelius characterized this material in more detail in 1828. By reduction of thorium tetrachloride with potassium, he isolated the metal and named it thorium after the Norse god of thunder and lightning Thor. The same isolation method was later used by Péligot for uranium.
Actinium was discovered in 1899 by André-Louis Debierne, an assistant of Marie Curie, in the pitchblende waste left after removal of radium and polonium. He described the substance (in 1899) as similar to titanium and (in 1900) as similar to thorium. The discovery of actinium by Debierne was however questioned in 1971 and 2000, arguing that Debierne's publications in 1904 contradicted his earlier work of 1899–1900. This view instead credits the 1902 work of Friedrich Oskar Giesel, who discovered a radioactive element named emanium that behaved similarly to lanthanum. The name actinium comes from the Greek aktis (ἀκτίς), meaning beam or ray. This metal was discovered not by its own radiation but by the radiation of its daughter products. Owing to the close similarity of actinium and lanthanum and its low abundance, pure actinium could only be produced in 1950. The term actinide was probably introduced by Victor Goldschmidt in 1937.
Protactinium was possibly isolated in 1900 by William Crookes. It was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the short-lived isotope 234mPa (half-life 1.17 minutes) during their studies of the 238U decay. They named the new element brevium (from Latin brevis meaning brief); the name was changed to protoactinium (from Greek πρῶτος + ἀκτίς meaning "first beam element") in 1918 when two groups of scientists, led by the Austrian Lise Meitner and Otto Hahn of Germany and Frederick Soddy and John Cranston of Great Britain, independently discovered the much longer-lived 231Pa. The name was shortened to protactinium in 1949. This element was little characterized until 1960, when A. G. Maddock and his co-workers in the U.K. isolated 130 grams of protactinium from 60 tonnes of waste left after extraction of uranium from its ore.
Neptunium and above
Neptunium (named for the planet Neptune, the next planet out from Uranus, after which uranium was named) was discovered by Edwin McMillan and Philip H. Abelson in 1940 in Berkeley, California. They produced the 239Np isotope (half-life = 2.4 days) by bombarding uranium with slow neutrons. It was the first transuranium element produced synthetically.
Transuranium elements do not occur in sizeable quantities in nature and are commonly synthesized via nuclear reactions conducted with nuclear reactors. For example, under irradiation with reactor neutrons, uranium-238 partially converts to plutonium-239:
_{92}^{238}U + _0^1n -> _{92}^{239}U ->[\beta^-] _{93}^{239}Np ->[\beta^-] _{94}^{239}Pu
This synthesis reaction was used by Fermi and his collaborators in their design of the reactors located at the Hanford Site, which produced significant amounts of plutonium-239 for the nuclear weapons of the Manhattan Project and the United States' post-war nuclear arsenal.
Actinides with the highest mass numbers are synthesized by bombarding uranium, plutonium, curium and californium with ions of nitrogen, oxygen, carbon, neon or boron in a particle accelerator. Thus nobelium was produced by bombarding uranium-238 with neon-22 as
_{92}^{238}U + _{10}^{22}Ne -> _{102}^{256}No + 4_0^1n.
The first isotopes of transplutonium elements, americium-241 and curium-242, were synthesized in 1944 by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso. Curium-242 was obtained by bombarding plutonium-239 with 32-MeV α-particles
_{94}^{239}Pu + _2^4He -> _{96}^{242}Cm + _0^1n.
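Reactions like those above must conserve both mass number (A) and atomic number (Z) across the arrow. A small checker, using our own convention of (Z, A) pairs with a free neutron as (0, 1) and an alpha particle as (2, 4), verifies the two syntheses quoted above:

```python
# Conservation check for the nuclear synthesis reactions quoted in the text:
# total Z and total A must match on both sides of each reaction.

def balanced(reactants, products):
    totals = lambda side: (sum(z for z, a in side), sum(a for z, a in side))
    return totals(reactants) == totals(products)

# 238U + 22Ne -> 256No + 4 n   (nobelium synthesis)
print(balanced([(92, 238), (10, 22)], [(102, 256)] + [(0, 1)] * 4))  # True

# 239Pu + 4He -> 242Cm + n     (curium synthesis)
print(balanced([(94, 239), (2, 4)], [(96, 242), (0, 1)]))            # True
```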
The americium-241 and curium-242 isotopes also were produced by irradiating plutonium in a nuclear reactor. The latter element was named after Marie Curie and her husband Pierre who are noted for discovering radium and for their work in radioactivity.
Bombarding curium-242 with α-particles resulted in an isotope of californium 245Cf (1950), and a similar procedure yielded in 1949 berkelium-243 from americium-241. The new elements were named after Berkeley, California, by analogy with its lanthanide homologue terbium, which was named after the village of Ytterby in Sweden.
In 1945, B. B. Cunningham obtained the first bulk chemical compound of a transplutonium element, namely americium hydroxide. Over the next few years, milligram quantities of americium and microgram amounts of curium were accumulated, allowing the production of isotopes of berkelium (Thompson, 1949) and californium (Thompson, 1950). Sizeable amounts of these elements were produced in 1958 (Burris B. Cunningham and Stanley G. Thompson), and the first californium compound (0.3 µg of CfOCl) was obtained in 1960 by B. B. Cunningham and J. C. Wallmann.
Einsteinium and fermium were identified in 1952–1953 in the fallout from the "Ivy Mike" nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Instantaneous exposure of uranium-238 to a large neutron flux resulting from the explosion produced heavy isotopes of uranium, including uranium-253 and uranium-255, and their β-decay yielded einsteinium-253 and fermium-255. The discovery of the new elements and the new data on neutron capture were initially kept secret on the orders of the US military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team were able to prepare einsteinium and fermium by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that it was not the first studies that had been carried out on those elements. The "Ivy Mike" studies were declassified and published in 1955. The first significant (submicrograms) amounts of einsteinium were produced in 1961 by Cunningham and colleagues, but this has not been done for fermium yet.
The first isotope of mendelevium, 256Md (half-life 87 min), was synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory R. Choppin, Bernard G. Harvey and Stanley G. Thompson when they bombarded an 253Es target with alpha particles in the 60-inch cyclotron of Berkeley Radiation Laboratory; this was the first isotope of any element to be synthesized one atom at a time.
There were several attempts to obtain isotopes of nobelium by Swedish (1957) and American (1958) groups, but the first reliable result was the synthesis of 256No by the Russian group (Georgy Flyorov et al.) in 1965, as acknowledged by the IUPAC in 1992. In their experiments, Flyorov et al. bombarded uranium-238 with neon-22.
In 1961, Ghiorso et al. obtained the first isotope of lawrencium by irradiating californium (mostly californium-252) with boron-10 and boron-11 ions. The mass number of this isotope was not clearly established (possibly 258 or 259) at the time. In 1965, 256Lr was synthesized by Flyorov et al. from 243Am and 18O. Thus IUPAC recognized the nuclear physics teams at Dubna and Berkeley as the co-discoverers of lawrencium.
Isotopes
32 isotopes of actinium and eight excited isomeric states of some of its nuclides were identified by 2016. Three isotopes, 225Ac, 227Ac and 228Ac, were found in nature and the others were produced in the laboratory; only the three natural isotopes are used in applications. Actinium-225 is a member of the radioactive neptunium series; first discovered in 1947 as a decay product of uranium-233, it is an α-emitter with a half-life of 10 days. Actinium-225 is less available than actinium-228, but is more promising in radiotracer applications. Actinium-227 (half-life 21.77 years) occurs in all uranium ores, but in small quantities. One gram of uranium (in radioactive equilibrium) contains only about 2×10⁻¹⁰ gram of 227Ac. Actinium-228 is a member of the radioactive thorium series formed by the decay of 228Ra; it is a β− emitter with a half-life of 6.15 hours. In one tonne of thorium there is about 5×10⁻⁸ gram of 228Ac. It was discovered by Otto Hahn in 1906.
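Trace equilibrium quantities like these follow from the half-lives alone: in secular equilibrium the daughter-to-parent atom ratio equals the ratio of their half-lives. A quick estimate, using the half-lives given in the text and standard atomic masses (the thorium half-life of about 14 billion years is an assumed input), reproduces the order of magnitude for 228Ac in thorium:

```python
# Secular-equilibrium estimate: N_daughter / N_parent = t_half(daughter) / t_half(parent).
# Half-life of 228Ac from the text (6.15 h); 232Th half-life ~1.405e10 years.

t_half_ac228_h = 6.15                  # hours
t_half_th232_h = 1.405e10 * 8766.0     # ~14.05 billion years, in hours

thorium_mass_g = 1e6                   # one tonne of thorium
atom_ratio = t_half_ac228_h / t_half_th232_h
ac228_mass_g = thorium_mass_g * atom_ratio * (228.0 / 232.0)  # mass-number correction

print(f"{ac228_mass_g:.1e} g of 228Ac per tonne of thorium")  # ~5e-8 g
```

The same argument, applied to 227Ac in the 235U decay chain (weighted by the 0.72% natural abundance of 235U), gives the roughly 10⁻¹⁰ gram-per-gram figure for uranium ores.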
There are 31 known isotopes of thorium ranging in mass number from 208 to 238. Of these, the longest-lived is 232Th, whose half-life of 1.4×10¹⁰ years means that it still exists in nature as a primordial nuclide. The next longest-lived is 230Th, an intermediate decay product of 238U with a half-life of 75,400 years. Several other thorium isotopes have half-lives over a day; all of these are also transient in the decay chains of 232Th, 235U, and 238U.
28 isotopes of protactinium are known with mass numbers 212–239, as well as three excited isomeric states. Only 231Pa and 234Pa have been found in nature. All the isotopes have short lifetimes, except for protactinium-231 (half-life 32,760 years). The most important isotopes are 231Pa and 233Pa; the latter is an intermediate product in obtaining uranium-233 and is the most affordable among artificial isotopes of protactinium. 233Pa has a convenient half-life and γ-radiation energy, and thus was used in most studies of protactinium chemistry. Protactinium-233 is a β-emitter with a half-life of 26.97 days.
There are 26 known isotopes of uranium, with mass numbers 215–242 (except 220 and 241). Three of them, 234U, 235U and 238U, are present in appreciable quantities in nature. Among the others, the most important is 233U, which is the final product of the transformation of 232Th irradiated by slow neutrons. 233U has a much higher fission efficiency with low-energy (thermal) neutrons than, e.g., 235U. Most uranium chemistry studies were carried out on uranium-238 owing to its long half-life of 4.4×10⁹ years.
There are 24 isotopes of neptunium with mass numbers of 219, 220, and 223–244; they are all highly radioactive. The most popular among scientists are the long-lived 237Np (t1/2 = 2.20×10⁶ years) and the short-lived 239Np and 238Np (t1/2 ~ 2 days).
There are 20 known isotopes of plutonium, with mass numbers 228–247. The most stable isotope of plutonium is 244Pu, with a half-life of 8.13×10⁷ years.
Eighteen isotopes of americium are known with mass numbers from 229 to 247 (with the exception of 231). The most important are 241Am and 243Am, which are alpha-emitters and also emit soft, but intense γ-rays; both of them can be obtained in an isotopically pure form. Chemical properties of americium were first studied with 241Am, but later shifted to 243Am, which is almost 20 times less radioactive. The disadvantage of 243Am is production of the short-lived daughter isotope 239Np, which has to be considered in the data analysis.
Among the 19 isotopes of curium, ranging in mass number from 233 to 251, the most accessible are 242Cm and 244Cm; they are α-emitters, but with much shorter lifetimes than the americium isotopes. These isotopes emit almost no γ-radiation, but undergo spontaneous fission with the associated emission of neutrons. The longer-lived isotopes of curium (245–248Cm, all α-emitters) are formed as a mixture during neutron irradiation of plutonium or americium. Upon short irradiation, this mixture is dominated by 246Cm, and then 248Cm begins to accumulate. Both of these isotopes, especially 248Cm, have a longer half-life (3.48×10⁵ years) and are much more convenient for carrying out chemical research than 242Cm and 244Cm, but they also have a rather high rate of spontaneous fission. 247Cm has the longest lifetime among isotopes of curium (1.56×10⁷ years), but is not formed in large quantities because of the strong fission induced by thermal neutrons.
Seventeen isotopes of berkelium have been identified, with mass numbers 233–234, 236, 238, and 240–252. Only 249Bk is available in large quantities; it has a relatively short half-life of 330 days and emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to β-radiation), but is sometimes used to detect this isotope. 247Bk is an alpha-emitter with a long half-life of 1,380 years, but it is hard to obtain in appreciable quantities; it is not formed upon neutron irradiation of plutonium because of the β-stability of curium isotopes with mass numbers below 248.
The 20 isotopes of californium with mass numbers 237–256 are formed in nuclear reactors; californium-253 is a β-emitter and the rest are α-emitters. The isotopes with even mass numbers (250Cf, 252Cf and 254Cf) have a high rate of spontaneous fission, especially 254Cf of which 99.7% decays by spontaneous fission. Californium-249 has a relatively long half-life (352 years), weak spontaneous fission and strong γ-emission that facilitates its identification. 249Cf is not formed in large quantities in a nuclear reactor because of the slow β-decay of the parent isotope 249Bk and a large cross section of interaction with neutrons, but it can be accumulated in the isotopically pure form as the β-decay product of (pre-selected) 249Bk. Californium produced by reactor-irradiation of plutonium mostly consists of 250Cf and 252Cf, the latter being predominant for large neutron fluences, and its study is hindered by the strong neutron radiation.
Among the 18 known isotopes of einsteinium with mass numbers from 240 to 257, the most affordable is 253Es. It is an α-emitter with a half-life of 20.47 days, a relatively weak γ-emission and small spontaneous fission rate as compared with the isotopes of californium. Prolonged neutron irradiation also produces a long-lived isotope 254Es (t1/2 = 275.5 days).
Twenty isotopes of fermium are known with mass numbers of 241–260. 254Fm, 255Fm and 256Fm are α-emitters with a short half-life (hours), which can be isolated in significant amounts. 257Fm (t1/2 = 100 days) can accumulate upon prolonged and strong irradiation. All these isotopes are characterized by high rates of spontaneous fission.
Among the 17 known isotopes of mendelevium (mass numbers from 244 to 260), the most studied is 256Md, which mainly decays through electron capture (α-radiation is ≈10%) with a half-life of 77 minutes. Another alpha emitter, 258Md, has a half-life of 53 days. Both these isotopes are produced from the rare einsteinium isotopes 253Es and 255Es respectively, which therefore limits their availability.
Long-lived isotopes of nobelium and isotopes of lawrencium (and of heavier elements) have relatively short half-lives. For nobelium, 11 isotopes are known with mass numbers 250–260 and 262. The chemical properties of nobelium and lawrencium were studied with 255No (t1/2 = 3 min) and 256Lr (t1/2 = 35 s). The longest-lived nobelium isotope, 259No, has a half-life of approximately 1 hour. Lawrencium has 13 known isotopes with mass numbers 251–262 and 266. The most stable of them all is 266Lr with a half life of 11 hours.
Among all of these, the only isotopes that occur in sufficient quantities in nature to be detected in anything more than traces and have a measurable contribution to the atomic weights of the actinides are the primordial 232Th, 235U, and 238U, and three long-lived decay products of natural uranium, 230Th, 231Pa, and 234U. Natural thorium consists of 0.02(2)% 230Th and 99.98(2)% 232Th; natural protactinium consists of 100% 231Pa; and natural uranium consists of 0.0054(5)% 234U, 0.7204(6)% 235U, and 99.2742(10)% 238U.
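The standard atomic weight of uranium follows directly from the abundances just quoted; a minimal check, where the isotopic masses (in unified atomic mass units) are standard literature values:

```python
# Atomic weight of natural uranium as the abundance-weighted mean
# of the isotopic masses (standard literature values, in u).
isotopes = [
    (234.040952, 0.000054),   # 234U, 0.0054%
    (235.043930, 0.007204),   # 235U, 0.7204%
    (238.050788, 0.992742),   # 238U, 99.2742%
]
atomic_weight = sum(mass * fraction for mass, fraction in isotopes)
print(f"{atomic_weight:.3f}")  # close to the accepted value 238.029
```

The same weighted-mean calculation applies to thorium, where 232Th dominates so strongly that the atomic weight is essentially the mass of that one nuclide.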
Formation in nuclear reactors
The figure buildup of actinides is a table of nuclides with the number of neutrons on the horizontal axis (isotopes) and the number of protons on the vertical axis (elements). The red dot divides the nuclides into two groups, so the figure is more compact. Each nuclide is represented by a square with the nuclide's mass number and half-life. Naturally existing actinide isotopes (Th, U) are marked with a bold border, alpha emitters have a yellow colour, and beta emitters have a blue colour. Pink indicates electron capture (236Np), whereas white stands for a long-lasting metastable state (242Am).
The formation of actinide nuclides is primarily characterised by:
Neutron capture reactions (n,γ), which are represented in the figure by a short right arrow.
The (n,2n) reactions and the less frequently occurring (γ,n) reactions are also taken into account, both of which are marked by a short left arrow.
Even more rarely and only triggered by fast neutrons, the (n,3n) reaction occurs, which is represented in the figure with one example, marked by a long left arrow.
In addition to these neutron- or gamma-induced nuclear reactions, the radioactive conversion of actinide nuclides also affects the nuclide inventory in a reactor. These decay types are marked in the figure by diagonal arrows. The beta-minus decay, marked with an arrow pointing up-left, plays a major role for the balance of the particle densities of the nuclides. Nuclides decaying by positron emission (beta-plus decay) or electron capture (ϵ) do not occur in a nuclear reactor except as products of knockout reactions; their decays are marked with arrows pointing down-right. Due to the long half-lives of the given nuclides, alpha decay plays almost no role in the formation and decay of the actinides in a power reactor, as the residence time of the nuclear fuel in the reactor core is rather short (a few years). Exceptions are the two relatively short-lived nuclides 242Cm (T1/2 = 163 d) and 236Pu (T1/2 = 2.9 y). Only for these two cases, the α decay is marked on the nuclide map by a long arrow pointing down-left. A few long-lived actinide isotopes, such as 244Pu and 250Cm, cannot be produced in reactors because neutron capture does not happen quickly enough to bypass the short-lived beta-decaying nuclides 243Pu and 249Cm; they can however be generated in nuclear explosions, which have much higher neutron fluxes.
Distribution in nature
Thorium and uranium are the most abundant actinides in nature, with respective mass concentrations of 16 ppm and 4 ppm. Uranium mostly occurs in the Earth's crust as a mixture of its oxides in the mineral uraninite, which is also called pitchblende because of its black color. There are several dozen other uranium minerals, such as carnotite (KUO2VO4·3H2O) and autunite (Ca(UO2)2(PO4)2·nH2O). The isotopic composition of natural uranium is 238U (relative abundance 99.2742%), 235U (0.7204%) and 234U (0.0054%); of these, 238U has the largest half-life, 4.51×10⁹ years. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27.3% was mined in Kazakhstan. Other important uranium mining countries are Canada (20.1%), Australia (15.7%), Namibia (9.1%), Russia (7.0%), and Niger (6.4%).
The most abundant thorium minerals are thorianite (ThO2), thorite (ThSiO4) and monazite ((Ce,La,Nd,Th)PO4). Most thorium minerals contain uranium and vice versa, and all of them have a significant fraction of lanthanides. Rich deposits of thorium minerals are located in the United States (440,000 tonnes), Australia and India (~300,000 tonnes each) and Canada (~100,000 tonnes).
The abundance of actinium in the Earth's crust is only about 5×10⁻¹⁵%. Actinium is mostly present in uranium-containing minerals, but also in other minerals, though in much smaller quantities. The content of actinium in most natural objects corresponds to the isotopic equilibrium with the parent isotope 235U, and it is not affected by the weak Ac migration. Protactinium is more abundant (10⁻¹²%) in the Earth's crust than actinium. It was discovered in uranium ore in 1913 by Fajans and Göhring. As with actinium, the distribution of protactinium follows that of 235U.
The half-life of the longest-lived isotope of neptunium, 237Np, is negligible compared to the age of the Earth. Thus neptunium is present in nature only in negligible amounts, produced as intermediate decay products of other isotopes. Traces of plutonium in uranium minerals were first found in 1942, and the more systematic results on 239Pu are summarized in the table (no other plutonium isotopes could be detected in those samples). The upper limit of abundance of the longest-lived isotope of plutonium, 244Pu, is 3×10⁻²⁰%. Plutonium could not be detected in samples of lunar soil. Owing to its scarcity in nature, most plutonium is produced synthetically.
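The first sentence can be made quantitative: over the age of the Earth, any primordial 237Np would have passed through roughly two thousand half-lives. A short sketch, working in log space because the surviving fraction underflows ordinary floating point (half-life and age are standard round values):

```python
import math

# Fraction of primordial 237Np surviving the age of the Earth.
T_NP237 = 2.14e6       # half-life of 237Np, years (standard value)
AGE_EARTH = 4.5e9      # age of the Earth, years

half_lives = AGE_EARTH / T_NP237           # ≈ 2100 half-lives elapsed
log10_fraction = -half_lives * math.log10(2)
print(f"~10^{log10_fraction:.0f} of any initial 237Np remains")
```

A surviving fraction of roughly 10⁻⁶³³ is zero for all practical purposes, which is why natural neptunium exists only as a transient decay product.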
Extraction
Owing to the low abundance of actinides, their extraction is a complex, multistep process. Fluorides of actinides are usually used because they are insoluble in water and can be easily separated with redox reactions. The fluorides are reduced with calcium, magnesium or barium to yield the metals.
Among the actinides, thorium and uranium are the easiest to isolate. Thorium is extracted mostly from monazite: thorium pyrophosphate (ThP2O7) is reacted with nitric acid, and the produced thorium nitrate treated with tributyl phosphate. Rare-earth impurities are separated by increasing the pH in sulfate solution.
In another extraction method, monazite is decomposed with a 45% aqueous solution of sodium hydroxide at 140 °C. Mixed metal hydroxides are extracted first, filtered at 80 °C, washed with water and dissolved in concentrated hydrochloric acid. Next, the acidic solution is neutralized with hydroxides to pH = 5.8, which results in precipitation of thorium hydroxide (Th(OH)4) contaminated with ~3% of rare-earth hydroxides; the rest of the rare-earth hydroxides remain in solution. Thorium hydroxide is dissolved in an inorganic acid and then purified from the rare-earth elements. An efficient method is the dissolution of thorium hydroxide in nitric acid, because the resulting solution can be purified by extraction with organic solvents:
Th(OH)4 + 4 HNO3 → Th(NO3)4 + 4 H2O
Metallic thorium is separated from the anhydrous oxide, chloride or fluoride by reacting it with calcium in an inert atmosphere:
ThO2 + 2 Ca → 2 CaO + Th
Sometimes thorium is extracted by electrolysis of a fluoride in a mixture of sodium and potassium chloride at 700–800 °C in a graphite crucible. Highly pure thorium can be extracted from its iodide with the crystal bar process.
Uranium is extracted from its ores in various ways. In one method, the ore is burned and then reacted with nitric acid to convert uranium into a dissolved state. Treating the solution with tributyl phosphate (TBP) in kerosene transforms uranium into the organic-phase complex UO2(NO3)2(TBP)2. The insoluble impurities are filtered off, and the uranium is precipitated by reaction with hydroxides as (NH4)2U2O7 or with hydrogen peroxide as UO4·2H2O.
When the uranium ore is rich in such minerals as dolomite, magnesite, etc., those minerals consume much acid. In this case, the carbonate method is used for uranium extraction. Its main component is an aqueous solution of sodium carbonate, which converts uranium into a complex [UO2(CO3)3]4−, which is stable in aqueous solutions at low concentrations of hydroxide ions. The advantages of the sodium carbonate method are that the chemicals have low corrosivity (compared to nitrates) and that most non-uranium metals precipitate from the solution. The disadvantage is that tetravalent uranium compounds precipitate as well. Therefore, the uranium ore is treated with sodium carbonate at elevated temperature and under oxygen pressure:
2 UO2 + O2 + 6 CO32− → 2 [UO2(CO3)3]4−
This equation suggests that the best solvent for the uranium carbonate processing is a mixture of carbonate with bicarbonate. At high pH, this results in precipitation of diuranate, which is treated with hydrogen in the presence of nickel yielding an insoluble uranium tetracarbonate.
Another separation method uses polymeric resins as a polyelectrolyte. Ion exchange processes in the resins result in separation of uranium. Uranium from resins is washed with a solution of ammonium nitrate or nitric acid that yields uranyl nitrate, UO2(NO3)2·6H2O. When heated, it turns into UO3, which is converted to UO2 with hydrogen:
UO3 + H2 → UO2 + H2O
Reacting uranium dioxide with hydrofluoric acid changes it to uranium tetrafluoride, which yields uranium metal upon reaction with magnesium metal:
4 HF + UO2 → UF4 + 2 H2O
UF4 + 2 Mg → 2 MgF2 + U
To extract plutonium, neutron-irradiated uranium is dissolved in nitric acid, and a reducing agent (FeSO4 or H2O2) is added to the resulting solution. This addition changes the oxidation state of plutonium from +6 to +4, while uranium remains in the form of uranyl nitrate (UO2(NO3)2). The solution is treated with a reducing agent and neutralized with ammonium carbonate to pH = 8, which results in precipitation of Pu4+ compounds.
In another method, Pu4+ ions are first extracted with tributyl phosphate, then reacted with hydrazine, washing out the recovered plutonium.
The major difficulty in separation of actinium is the similarity of its properties with those of lanthanum. Thus actinium is either synthesized in nuclear reactions from isotopes of radium or separated using ion-exchange procedures.
Properties
Actinides have similar properties to lanthanides. The 6d and 7s electronic shells are filled in actinium and thorium, and the 5f shell is being filled with further increase in atomic number; the 4f shell is filled in the lanthanides. The first experimental evidence for the filling of the 5f shell in actinides was obtained by McMillan and Abelson in 1940. As in lanthanides (see lanthanide contraction), the ionic radius of actinides monotonically decreases with atomic number (see also Aufbau principle).
Physical properties
Actinides are typical metals. All of them are soft and have a silvery color (but tarnish in air), relatively high density and plasticity. Some of them can be cut with a knife. Their electrical resistivity varies between 15 and 150 µΩ·cm. The hardness of thorium is similar to that of soft steel, so heated pure thorium can be rolled in sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium, but is harder than either of them. All actinides are radioactive, paramagnetic, and, with the exception of actinium, have several crystalline phases: plutonium has seven, and uranium, neptunium and californium three. The crystal structures of protactinium, uranium, neptunium and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d-transition metals.
All actinides are pyrophoric, especially when finely divided, that is, they spontaneously ignite upon reaction with air at room temperature. The melting point of actinides does not have a clear dependence on the number of f-electrons. The unusually low melting point of neptunium and plutonium (~640 °C) is explained by hybridization of 5f and 6d orbitals and the formation of directional bonds in these metals.
Chemical properties
Like the lanthanides, all actinides are highly reactive with halogens and chalcogens; however, the actinides react more easily. Actinides, especially those with a small number of 5f-electrons, are prone to hybridization. This is explained by the similarity of the electron energies at the 5f, 7s and 6d shells. Most actinides exhibit a larger variety of valence states, and the most stable are +6 for uranium, +5 for protactinium and neptunium, +4 for thorium and plutonium and +3 for actinium and other actinides.
Actinium is chemically similar to lanthanum, which is explained by their similar ionic radii and electronic structures. Like lanthanum, actinium almost always has an oxidation state of +3 in compounds, but it is less reactive and has more pronounced basic properties. Among the trivalent actinides, Ac3+ is the least acidic, i.e. it has the weakest tendency to hydrolyze in aqueous solutions.
Thorium is rather active chemically. Owing to the lack of electrons in the 6d and 5f orbitals, tetravalent thorium compounds are colorless. At pH < 3, solutions of thorium salts are dominated by the cation [Th(H2O)8]4+. The Th4+ ion is relatively large and, depending on the coordination number, can have a radius between 0.95 and 1.14 Å. As a result, thorium salts have only a weak tendency to hydrolyse. A distinctive property of thorium salts is their high solubility both in water and in polar organic solvents.
Protactinium exhibits two valence states; the +5 state is stable, and the +4 state easily oxidizes to protactinium(V). Thus tetravalent protactinium in solution is obtained by the action of strong reducing agents in a hydrogen atmosphere. Tetravalent protactinium is chemically similar to uranium(IV) and thorium(IV). Fluorides, phosphates, hypophosphates, iodates and phenylarsonates of protactinium(IV) are insoluble in water and dilute acids. Protactinium forms soluble carbonates. The hydrolytic properties of pentavalent protactinium are close to those of tantalum(V) and niobium(V). The complex chemical behavior of protactinium is a consequence of the start of the filling of the 5f shell in this element.
Uranium has a valence from 3 to 6, the last being most stable. In the hexavalent state, uranium is very similar to the group 6 elements. Many compounds of uranium(IV) and uranium(VI) are non-stoichiometric, i.e. have variable composition. For example, the actual chemical formula of uranium dioxide is UO2+x, where x varies between −0.4 and 0.32. Uranium(VI) compounds are weak oxidants. Most of them contain the linear "uranyl" group, . Between 4 and 6 ligands can be accommodated in an equatorial plane perpendicular to the uranyl group. The uranyl group acts as a hard acid and forms stronger complexes with oxygen-donor ligands than with nitrogen-donor ligands. and are also the common form of Np and Pu in the +6 oxidation state. Uranium(IV) compounds exhibit reducing properties, e.g., they are easily oxidized by atmospheric oxygen. Uranium(III) is a very strong reducing agent. Owing to the presence of d-shell, uranium (as well as many other actinides) forms organometallic compounds, such as UIII(C5H5)3 and UIV(C5H5)4.
Neptunium has valence states from 3 to 7, which can be simultaneously observed in solutions. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds.
Plutonium also exhibits valence states between 3 and 7 inclusive, and thus is chemically similar to neptunium and uranium. It is highly reactive, and quickly forms an oxide film in air. Plutonium reacts with hydrogen even at temperatures as low as 25–50 °C; it also easily forms halides and intermetallic compounds. Hydrolysis reactions of plutonium ions of different oxidation states are quite diverse. Plutonium(V) can enter polymerization reactions.
The largest chemical diversity among actinides is observed in americium, which can have a valence between 2 and 6. Divalent americium is obtained only in dry compounds and non-aqueous solutions (acetonitrile). Oxidation states +3, +5 and +6 are typical in aqueous solutions, and also occur in the solid state. Tetravalent americium forms stable solid compounds (dioxide, fluoride and hydroxide) as well as complexes in aqueous solutions. It was reported that in alkaline solution americium can be oxidized to the heptavalent state, but these data proved erroneous. The most stable valence of americium is 3 in aqueous solutions and 3 or 4 in solid compounds.
Valence 3 is dominant in all subsequent elements up to lawrencium (with the exception of nobelium). Curium can be tetravalent in solids (fluoride, dioxide). Berkelium, along with a valence of +3, also shows the valence of +4, more stable than that of curium; the valence 4 is observed in solid fluoride and dioxide. The stability of Bk4+ in aqueous solution is close to that of Ce4+. Only valence 3 was observed for californium, einsteinium and fermium. The divalent state is proven for mendelevium and nobelium, and in nobelium it is more stable than the trivalent state. Lawrencium shows valence 3 both in solutions and solids.
The redox potential E(AnO22+/An4+) increases from −0.32 V for uranium, through 0.34 V (Np) and 1.04 V (Pu), to 1.34 V for americium, revealing the increasing reducing ability of the An4+ ion from americium to uranium. All actinides form AnH3 hydrides of black color with salt-like properties. Actinides also produce carbides with the general formula AnC or AnC2 (U2C3 for uranium) as well as sulfides An2S3 and AnS2.
Compounds
Oxides and hydroxides
Some actinides can exist in several oxide forms, such as An2O3, AnO2, An2O5 and AnO3. For all actinides, the oxides AnO3 are amphoteric, while An2O3, AnO2 and An2O5 are basic; the latter easily react with water, forming bases:
An2O3 + 3 H2O → 2 An(OH)3.
These bases are poorly soluble in water and by their activity are close to the hydroxides of rare-earth metals.
Np(OH)3 has not yet been synthesized, Pu(OH)3 has a blue color while Am(OH)3 is pink and curium hydroxide Cm(OH)3 is colorless. Bk(OH)3 and Cf(OH)3 are also known, as are tetravalent hydroxides for Np, Pu and Am and pentavalent for Np and Am.
The strongest base is that of actinium. All compounds of actinium are colorless, except for black actinium sulfide (Ac2S3). Dioxides of tetravalent actinides crystallize in the cubic system, with the same structure as calcium fluoride.
Thorium reacting with oxygen exclusively forms the dioxide:
Th + O2 → ThO2 (thorium dioxide; at 1000 °C)
Thorium dioxide is a refractory material with the highest melting point of any known oxide (3390 °C). Adding 0.8–1% ThO2 to tungsten stabilizes its structure, so the doped filaments have better mechanical resistance to vibrations. To dissolve ThO2 in acids, it is heated to 500–600 °C; heating above 600 °C produces a form of ThO2 that is very resistant to acids and other reagents. A small addition of fluoride ions catalyses the dissolution of thorium dioxide in acids.
Two protactinium oxides have been obtained: PaO2 (black) and Pa2O5 (white); the former is isomorphic with ThO2 and the latter is easier to obtain. Both oxides are basic, and Pa(OH)5 is a weak, poorly soluble base.
Decomposition of certain salts of uranium, for example UO2(NO3)2·6H2O, in air at 400 °C yields orange or yellow UO3. This oxide is amphoteric and forms several hydroxides, the most stable being uranyl hydroxide, UO2(OH)2. Reaction of uranium(VI) oxide with hydrogen results in uranium dioxide, which is similar in its properties to ThO2. This oxide is also basic and corresponds to uranium hydroxide, U(OH)4.
Plutonium, neptunium and americium form two basic oxides: An2O3 and AnO2. Neptunium trioxide is unstable; thus, only Np3O8 could be obtained so far. However, the oxides of plutonium and neptunium with the chemical formula AnO2 and An2O3 are well characterized.
Salts
Actinides easily react with halogens, forming salts with the formulas MX3 and MX4 (X = halogen). Thus the first berkelium compound, BkCl3, was synthesized in 1962 in an amount of 3 nanograms. Like the halides of the rare-earth elements, actinide chlorides, bromides, and iodides are water-soluble, while the fluorides are insoluble. Uranium easily yields a colorless hexafluoride, which sublimes at 56.5 °C; because of its volatility, it is used in the separation of uranium isotopes by gas centrifuge or gaseous diffusion. Actinide hexafluorides have properties close to those of anhydrides. They are very sensitive to moisture and hydrolyze, forming AnO2F2. The pentachloride and black hexachloride of uranium have been synthesized, but both are unstable.
Action of acids on actinides yields salts, and if the acids are non-oxidizing then the actinide in the salt is in low-valence state:
U + 2 H2SO4 → U(SO4)2 + 2 H2
2 Pu + 6 HCl → 2 PuCl3 + 3 H2
However, in these reactions the hydrogen generated can react with the metal, forming the corresponding hydride. Uranium reacts with acids and water much more easily than thorium.
Actinide salts can also be obtained by dissolving the corresponding hydroxides in acids. Nitrates, chlorides, sulfates and perchlorates of actinides are water-soluble. When crystallizing from aqueous solutions, these salts form hydrates, such as Th(NO3)4·6H2O, Th(SO4)2·9H2O and Pu2(SO4)3·7H2O. Salts of high-valence actinides easily hydrolyze: thus, the colorless sulfate, chloride, perchlorate and nitrate of thorium transform into basic salts with the formulas Th(OH)2SO4 and Th(OH)3NO3. The solubility behavior of trivalent and tetravalent actinides is like that of lanthanide salts: phosphates, fluorides, oxalates, iodates and carbonates of actinides are weakly soluble in water, and they precipitate as hydrates, such as ThF4·3H2O and Th(CrO4)2·3H2O.
Actinides with oxidation state +6, in addition to the AnO22+-type cations, form [AnO4]2−, [An2O7]2− and other complex anions. For example, uranium, neptunium and plutonium form salts of the Na2UO4 (uranate) and (NH4)2U2O7 (diuranate) types. In comparison with lanthanides, actinides more easily form coordination compounds, and this ability increases with the actinide valence. Trivalent actinides do not form fluoride coordination compounds, whereas tetravalent thorium forms K2ThF6, KThF5, and even K5ThF9 complexes. Thorium also forms the corresponding sulfates (for example Na2SO4·Th(SO4)2·5H2O), nitrates and thiocyanates. Salts with the general formula An2Th(NO3)6·nH2O are of coordination nature, with the coordination number of thorium equal to 12. Complex salts of pentavalent and hexavalent actinides are even easier to produce. The most stable coordination compounds of actinides – those of tetravalent thorium and uranium – are obtained in reactions with diketones, e.g. acetylacetone.
Applications
While actinides have some established daily-life applications, such as in smoke detectors (americium) and gas mantles (thorium), they are mostly used in nuclear weapons and as fuel in nuclear reactors. The last two areas exploit the property of actinides to release enormous energy in nuclear reactions, which under certain conditions may become self-sustaining chain reactions.
The most important isotope for nuclear power applications is uranium-235. It is used in thermal reactors, and its concentration in natural uranium does not exceed 0.72%. This isotope strongly absorbs thermal neutrons, releasing much energy; the fission of one gram of 235U yields about 1 MW·day. Importantly, 235U emits more neutrons than it absorbs; upon reaching the critical mass, it sustains a self-sustaining chain reaction. Typically, the uranium nucleus divides into two fragments, with the release of 2–3 neutrons, for example:
235U + n ⟶ 141Ba + 92Kr + 3 n
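The "1 MW·day per gram" figure can be checked by assuming roughly 200 MeV released per fission, a standard textbook value:

```python
# Energy released by complete fission of 1 g of 235U,
# assuming ~200 MeV per fission (standard textbook value).
N_A = 6.022e23          # Avogadro's number
MEV_TO_J = 1.602e-13    # joules per MeV

fissions_per_gram = N_A / 235
energy_joules = fissions_per_gram * 200 * MEV_TO_J
mw_day = energy_joules / (1e6 * 86400)   # 1 MW·day = 8.64e10 J
print(f"{mw_day:.2f} MW-day per gram")   # → 0.95 MW-day per gram
```

The result, just under 1 MW·day, confirms the order-of-magnitude claim in the text.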
Other promising actinide isotopes for nuclear power are thorium-232 and its product from the thorium fuel cycle, uranium-233.
Emission of neutrons during the fission of uranium is important not only for maintaining the nuclear chain reaction, but also for the synthesis of heavier actinides. Neutron capture on uranium-238 gives uranium-239, which converts via two successive β-decays (through neptunium-239) into plutonium-239; like uranium-235, plutonium-239 is capable of fission. The world's first nuclear reactors were built not for energy, but for producing plutonium-239 for nuclear weapons.
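The capture-and-decay route from 238U to 239Pu can be sketched with the Bateman solution for a two-step decay; the β-decay proceeds through the intermediate 239Np, and the half-lives used (23.45 min for 239U, 2.356 d for 239Np) are standard values:

```python
import math

# Bateman solution for the chain 239U -> 239Np -> 239Pu.
lam_u = math.log(2) / (23.45 / 60)    # decay constant of 239U, per hour
lam_np = math.log(2) / (2.356 * 24)   # decay constant of 239Np, per hour

def np239_fraction(t_hours, n0=1.0):
    """Fraction of the initial 239U present as 239Np after t hours."""
    return n0 * lam_u / (lam_np - lam_u) * (
        math.exp(-lam_u * t_hours) - math.exp(-lam_np * t_hours))

# A few hours after irradiation almost all 239U has become 239Np,
# which then feeds 239Pu over the following weeks.
print(round(np239_fraction(4), 3))   # → 0.958
```

Because 239U is so much shorter-lived than 239Np, the intermediate builds up quickly and then drains slowly into 239Pu, which is why irradiated fuel is typically cooled before plutonium is extracted.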
About half of the produced thorium is used as the light-emitting material of gas mantles. Thorium is also added to multicomponent alloys of magnesium and zinc. The Mg–Th alloys are light and strong, and also have a high melting point and ductility, and thus are widely used in the aviation industry and in the production of missiles. Thorium also has good electron emission properties, with a long lifetime and a low potential barrier for emission. The relative content of thorium and uranium isotopes is widely used to estimate the age of various objects, including stars (see radiometric dating).
The major application of plutonium has been in nuclear weapons, where the isotope plutonium-239 was a key component due to its ease of fission and availability. Plutonium-based designs allow reducing the critical mass to about a third of that for uranium-235. The "Fat Man"-type plutonium bombs produced during the Manhattan Project used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. (See also Nuclear weapon design.) Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs.
Plutonium-238 is a potentially more efficient isotope for nuclear reactors, since it has a smaller critical mass than uranium-235, but it continues to release much thermal energy (0.56 W/g) by decay even when the fission chain reaction is stopped by control rods. Its application is limited by its high price (about US$1000/g). This isotope has been used in thermopiles and water distillation systems of some space satellites and stations. The Galileo and Apollo spacecraft (e.g. Apollo 14) had heaters powered by kilogram quantities of plutonium-238 oxide; this heat is also transformed into electricity with thermopiles. The decay of plutonium-238 produces relatively harmless alpha particles and is not accompanied by gamma-irradiation. Therefore, this isotope (~160 mg) is used as the energy source in heart pacemakers, where it lasts about 5 times longer than conventional batteries.
Actinium-227 is used as a neutron source. Its high specific energy (14.5 W/g) and the possibility of obtaining significant quantities of thermally stable compounds are attractive for use in long-lasting thermoelectric generators for remote use. 228Ac is used as an indicator of radioactivity in chemical research, as it emits high-energy electrons (2.18 MeV) that can be easily detected. 228Ac-228Ra mixtures are widely used as an intense gamma-source in industry and medicine.
Development of self-glowing actinide-doped materials with durable crystalline matrices is a new area of actinide utilization, as the addition of alpha-emitting radionuclides to some glasses and crystals may confer luminescence.
Toxicity
Radioactive substances can harm human health via (i) local skin contamination, (ii) internal exposure due to ingestion of radioactive isotopes, and (iii) external overexposure to β-activity and γ-radiation. Together with radium and the transuranium elements, actinium is one of the most dangerous radioactive poisons, with high specific α-activity. The most important feature of actinium is its ability to accumulate and remain in the surface layer of bone. At the initial stage of poisoning, actinium accumulates in the liver. Another danger of actinium is that it undergoes radioactive decay faster than it is excreted. Absorption from the digestive tract is much smaller (~0.05%) for actinium than for radium.
Protactinium in the body tends to accumulate in the kidneys and bones. The maximum safe dose of protactinium in the human body is 0.03 µCi, which corresponds to 0.5 micrograms of 231Pa. This isotope, which might be present in the air as an aerosol, is 2.5 times more toxic than hydrocyanic acid.
Plutonium, when entering the body through air, food or blood (e.g. a wound), mostly settles in the lungs, liver and bones, with only about 10% going to other organs, and remains there for decades. The long residence time of plutonium in the body is partly explained by its poor solubility in water. Some isotopes of plutonium emit ionizing α-radiation, which damages the surrounding cells. The median lethal dose (LD50) for 30 days in dogs after intravenous injection of plutonium is 0.32 milligrams per kg of body mass, and thus the lethal dose for humans is approximately 22 mg for a person weighing 70 kg; the amount for respiratory exposure should be approximately four times greater. Another estimate assumes that plutonium is 50 times less toxic than radium, and thus the permissible content of plutonium in the body should be 5 µg or 0.3 µCi. Such an amount is nearly invisible under a microscope. After trials on animals, this maximum permissible dose was reduced to 0.65 µg or 0.04 µCi. Studies on animals also revealed that the most dangerous plutonium exposure route is through inhalation, after which 5–25% of inhaled substances are retained in the body. Depending on the particle size and solubility of the plutonium compounds, plutonium is localized either in the lungs or in the lymphatic system, or is absorbed in the blood and then transported to the liver and bones. Contamination via food is the least likely route. In this case, only about 0.05% of soluble and 0.01% of insoluble compounds of plutonium are absorbed into the blood, and the rest is excreted. Damaged skin exposed to plutonium, however, can retain nearly 100% of it.
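The per-kilogram LD50 scaling above is simple arithmetic; a small illustrative helper (the 4x respiratory factor follows the text's estimate, and the function name is my own):

```python
def scale_ld50(ld50_mg_per_kg, body_mass_kg, route_factor=1.0):
    """Scale a per-kg LD50 to a whole-body amount in mg; route_factor
    adjusts for exposure route (e.g. ~4x for respiratory exposure)."""
    return ld50_mg_per_kg * body_mass_kg * route_factor

print(scale_ld50(0.32, 70))       # ~22.4 mg intravenous, matching the ~22 mg quoted
print(scale_ld50(0.32, 70, 4.0))  # ~90 mg for respiratory exposure
```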
Using actinides in nuclear fuel, sealed radioactive sources or advanced materials such as self-glowing crystals has many potential benefits. However, a serious concern is the extremely high radiotoxicity of actinides and their migration in the environment. The use of chemically unstable forms of actinides in MOX fuel and sealed radioactive sources is not appropriate by modern safety standards. A key challenge is to develop stable and durable actinide-bearing materials that provide safe storage, use and final disposal. One promising approach is the incorporation of actinide solid solutions into durable crystalline host phases.
Nuclear properties
See also
Actinides in the environment
Lanthanides
Major actinides
Minor actinides
Transuranics
Notes
References
Bibliography
External links
Lawrence Berkeley Laboratory image of historic periodic table by Seaborg showing actinide series for the first time
Lawrence Livermore National Laboratory, Uncovering the Secrets of the Actinides
Los Alamos National Laboratory, Actinide Research Quarterly
Periodic table
|
https://en.wikipedia.org/wiki/Alkaloid
|
Alkaloids are a class of basic, naturally occurring organic compounds that contain at least one nitrogen atom. This group also includes some related compounds with neutral and even weakly acidic properties. Some synthetic compounds of similar structure may also be termed alkaloids. In addition to carbon, hydrogen and nitrogen, alkaloids may also contain oxygen or sulfur. More rarely still, they may contain elements such as phosphorus, chlorine, and bromine.
Alkaloids are produced by a large variety of organisms including bacteria, fungi, plants, and animals. They can be purified from crude extracts of these organisms by acid-base extraction, or solvent extractions followed by silica-gel column chromatography. Alkaloids have a wide range of pharmacological activities including antimalarial (e.g. quinine), antiasthma (e.g. ephedrine), anticancer (e.g. homoharringtonine), cholinomimetic (e.g. galantamine), vasodilatory (e.g. vincamine), antiarrhythmic (e.g. quinidine), analgesic (e.g. morphine), antibacterial (e.g. chelerythrine), and antihyperglycemic activities (e.g. piperine). Many have found use in traditional or modern medicine, or as starting points for drug discovery. Other alkaloids possess psychotropic (e.g. psilocin) and stimulant activities (e.g. cocaine, caffeine, nicotine, theobromine), and have been used in entheogenic rituals or as recreational drugs. Alkaloids can be toxic too (e.g. atropine, tubocurarine). Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly evoke a bitter taste.
The boundary between alkaloids and other nitrogen-containing natural compounds is not clear-cut. Compounds like amino acid peptides, proteins, nucleotides, nucleic acid, amines, and antibiotics are usually not called alkaloids. Natural compounds containing nitrogen in the exocyclic position (mescaline, serotonin, dopamine, etc.) are usually classified as amines rather than as alkaloids. Some authors, however, consider alkaloids a special case of amines.
Naming
The name "alkaloids" was introduced in 1819 by the German chemist Carl Friedrich Wilhelm Meissner, and is derived from the Late Latin root alkali and the Greek suffix -oid ('like'). However, the term came into wide use only after the publication of a review article by Oscar Jacobsen in the chemical dictionary of Albert Ladenburg in the 1880s.
There is no unique method for naming alkaloids. Many individual names are formed by adding the suffix "ine" to the species or genus name. For example, atropine is isolated from the plant Atropa belladonna; strychnine is obtained from the seeds of the Strychnine tree (Strychnos nux-vomica L.). Where several alkaloids are extracted from one plant, their names are often distinguished by variations in the suffix: "idine", "anine", "aline", "inine", etc. There are also at least 86 alkaloids whose names contain the root "vin" because they are extracted from vinca plants such as Vinca rosea (Catharanthus roseus); these are called vinca alkaloids.
History
Alkaloid-containing plants have been used by humans since ancient times for therapeutic and recreational purposes. For example, medicinal plants have been known in Mesopotamia from about 2000 BC. The Odyssey of Homer referred to a gift given to Helen by the Egyptian queen, a drug bringing oblivion. It is believed that the gift was an opium-containing drug. A Chinese book on houseplants written in 1st–3rd centuries BC mentioned a medical use of ephedra and opium poppies. Also, coca leaves have been used by Indigenous South Americans since ancient times.
Extracts from plants containing toxic alkaloids, such as aconitine and tubocurarine, were used since antiquity for poisoning arrows.
Studies of alkaloids began in the 19th century. In 1804, the German chemist Friedrich Sertürner isolated from opium a "soporific principle" (), which he called "morphium", referring to Morpheus, the Greek god of dreams; in German and some other Central-European languages, this is still the name of the drug. The term "morphine", used in English and French, was given by the French physicist Joseph Louis Gay-Lussac.
A significant contribution to the chemistry of alkaloids in the early years of its development was made by the French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou, who discovered quinine (1820) and strychnine (1818). Several other alkaloids were discovered around that time, including xanthine (1817), atropine (1819), caffeine (1820), coniine (1827), nicotine (1828), colchicine (1833), sparteine (1851), and cocaine (1860). The development of the chemistry of alkaloids was accelerated by the emergence of spectroscopic and chromatographic methods in the 20th century, so that by 2008 more than 12,000 alkaloids had been identified.
The first complete synthesis of an alkaloid was achieved in 1886 by the German chemist Albert Ladenburg. He produced coniine by reacting 2-methylpyridine with acetaldehyde and reducing the resulting 2-propenyl pyridine with sodium.
Classifications
Compared with most other classes of natural compounds, alkaloids are characterized by a great structural diversity. There is no uniform classification. Initially, when knowledge of chemical structures was lacking, botanical classification of the source plants was relied on. This classification is now considered obsolete.
More recent classifications are based on similarity of the carbon skeleton (e.g., indole-, isoquinoline-, and pyridine-like) or biochemical precursor (ornithine, lysine, tyrosine, tryptophan, etc.). However, they require compromises in borderline cases; for example, nicotine contains a pyridine fragment from nicotinamide and a pyrrolidine part from ornithine and therefore can be assigned to both classes.
Alkaloids are often divided into the following major groups:
"True alkaloids" contain nitrogen in the heterocycle and originate from amino acids. Their characteristic examples are atropine, nicotine, and morphine. This group also includes some alkaloids that besides the nitrogen heterocycle contain terpene (e.g., evonine) or peptide fragments (e.g. ergotamine). The piperidine alkaloids coniine and coniceine may be regarded as true alkaloids (rather than pseudoalkaloids: see below) although they do not originate from amino acids.
"Protoalkaloids", which contain nitrogen (but not as part of a heterocycle) and also originate from amino acids. Examples include mescaline, adrenaline and ephedrine.
Polyamine alkaloids – derivatives of putrescine, spermidine, and spermine.
Peptide and cyclopeptide alkaloids.
Pseudoalkaloids – alkaloid-like compounds that do not originate from amino acids. This group includes terpene-like and steroid-like alkaloids, as well as purine-like alkaloids such as caffeine, theobromine, theacrine and theophylline. Some authors classify ephedrine and cathinone as pseudoalkaloids. Those originate from the amino acid phenylalanine, but acquire their nitrogen atom not from the amino acid but through transamination.
Some alkaloids do not have the carbon skeleton characteristic of their group. For example, galanthamine and homoaporphines do not contain an isoquinoline fragment, but are, in general, attributed to isoquinoline alkaloids.
Main classes of monomeric alkaloids are listed in the table below:
Properties
Most alkaloids contain oxygen in their molecular structure; those compounds are usually colorless crystals at ambient conditions. Oxygen-free alkaloids, such as nicotine or coniine, are typically volatile, colorless, oily liquids. Some alkaloids are colored, like berberine (yellow) and sanguinarine (orange).
Most alkaloids are weak bases, but some, such as theobromine and theophylline, are amphoteric. Many alkaloids dissolve poorly in water but readily dissolve in organic solvents, such as diethyl ether, chloroform or 1,2-dichloroethane. Caffeine, cocaine, codeine and nicotine are slightly soluble in water (with a solubility of ≥1 g/L), whereas others, including morphine and yohimbine, are very slightly water-soluble (0.1–1 g/L). Alkaloids and acids form salts of various strengths. These salts are usually freely soluble in water and ethanol and poorly soluble in most organic solvents. Exceptions include scopolamine hydrobromide, which is soluble in organic solvents, and the water-soluble quinine sulfate.
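The solubility bands above can be expressed as a small classifier; the thresholds and class names follow the text's g/L ranges, but the function itself is purely illustrative:

```python
def solubility_class(grams_per_litre):
    """Bucket aqueous solubility into the descriptive bands used above."""
    if grams_per_litre >= 1.0:
        return "slightly soluble"
    if grams_per_litre >= 0.1:
        return "very slightly soluble"
    return "practically insoluble"

print(solubility_class(2.0))   # caffeine-like: slightly soluble
print(solubility_class(0.5))   # morphine-like: very slightly soluble
```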
Most alkaloids have a bitter taste or are poisonous when ingested. Alkaloid production in plants appears to have evolved in response to feeding by herbivorous animals; however, some animals have evolved the ability to detoxify alkaloids. Some alkaloids can produce developmental defects in the offspring of animals that consume but cannot detoxify the alkaloids. One example is the alkaloid cyclopamine, produced in the leaves of corn lily. During the 1950s, up to 25% of lambs born to sheep that had grazed on corn lily had serious facial deformities. These ranged from deformed jaws to cyclopia (see picture). After decades of research, in the 1980s, the compound responsible for these deformities was identified as the alkaloid 11-deoxyjervine, later renamed cyclopamine.
Distribution in nature
Alkaloids are generated by various living organisms, especially by higher plants – about 10 to 25% of which contain alkaloids. Therefore, in the past the term "alkaloid" was associated with plants.
The alkaloid content in plants is usually within a few percent and is inhomogeneous across the plant's tissues. Depending on the type of plant, the maximum concentration is observed in the leaves (for example, black henbane), fruits or seeds (Strychnine tree), roots (Rauvolfia serpentina) or bark (cinchona). Furthermore, different tissues of the same plant may contain different alkaloids.
Besides plants, alkaloids are found in certain types of fungi, such as psilocybin in the fruiting bodies of the genus Psilocybe, and in animals, such as bufotenin in the skin of some toads and in a number of insects, notably ants. Many marine organisms also contain alkaloids. Some amines, such as adrenaline and serotonin, which play an important role in higher animals, are similar to alkaloids in their structure and biosynthesis and are sometimes called alkaloids.
Extraction
Because of the structural diversity of alkaloids, there is no single method of their extraction from natural raw materials. Most methods exploit the property of most alkaloids to be soluble in organic solvents but not in water, and the opposite tendency of their salts.
Most plants contain several alkaloids. Their mixture is extracted first and then individual alkaloids are separated. Plants are thoroughly ground before extraction. Most alkaloids are present in the raw plants in the form of salts of organic acids. The extracted alkaloids may remain salts or change into bases. Base extraction is achieved by processing the raw material with alkaline solutions and extracting the alkaloid bases with organic solvents, such as 1,2-dichloroethane, chloroform, diethyl ether or benzene. Then, the impurities are dissolved by weak acids; this converts alkaloid bases into salts that are washed away with water. If necessary, an aqueous solution of alkaloid salts is again made alkaline and treated with an organic solvent. The process is repeated until the desired purity is achieved.
In the acidic extraction, the raw plant material is processed by a weak acidic solution (e.g., acetic acid in water, ethanol, or methanol). A base is then added to convert alkaloids to basic forms that are extracted with organic solvent (if the extraction was performed with alcohol, it is removed first, and the remainder is dissolved in water). The solution is purified as described above.
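The acid/base switching that drives both extraction schemes follows from the Henderson–Hasselbalch relation: at low pH an alkaloid base is almost fully protonated (a water-soluble salt), while at high pH the free base predominates and partitions into the organic solvent. A minimal sketch, using a hypothetical pKa for the alkaloid's conjugate acid:

```python
def fraction_protonated(ph, pka_conjugate_acid):
    """Henderson–Hasselbalch: fraction of an alkaloid base present as its
    protonated (water-soluble salt) form at a given pH."""
    return 1.0 / (1.0 + 10 ** (ph - pka_conjugate_acid))

# An alkaloid whose conjugate acid has pKa ~8 (illustrative value):
print(fraction_protonated(2.0, 8.0))   # acidic step: ~1.0, stays in the aqueous phase
print(fraction_protonated(12.0, 8.0))  # basic step: ~1e-4, free base moves to organic phase
```

This is why alternating acidic washes and basifications can shuttle the alkaloid between phases and away from neutral impurities.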
Alkaloids are separated from their mixture using their different solubility in certain solvents and different reactivity with certain reagents or by distillation.
A number of alkaloids are identified from insects, among which the fire ant venom alkaloids known as solenopsins have received greater attention from researchers. These insect alkaloids can be efficiently extracted by solvent immersion of live fire ants or by centrifugation of live ants followed by silica-gel chromatography purification. Tracking and dosing the extracted solenopsin ant alkaloids has been described as possible based on their absorbance peak around 232 nanometers.
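Dosing by UV absorbance, as mentioned for the solenopsins, is an application of the Beer–Lambert law, A = εcl. The molar absorptivity below is a placeholder, since a real value would come from a calibration curve at 232 nm:

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_cm=1.0):
    """Beer–Lambert law, A = eps * c * l, solved for concentration (mol/L).
    `molar_absorptivity` (eps, in L/(mol·cm)) is an assumed calibration value."""
    return absorbance / (molar_absorptivity * path_cm)

c = concentration_from_absorbance(0.5, 10000.0)  # eps value purely illustrative
print(c)  # 5e-05 mol/L for this assumed eps
```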
Biosynthesis
Biological precursors of most alkaloids are amino acids, such as ornithine, lysine, phenylalanine, tyrosine, tryptophan, histidine, aspartic acid, and anthranilic acid. Nicotinic acid can be synthesized from tryptophan or aspartic acid. The pathways of alkaloid biosynthesis are too numerous to be easily classified. However, there are a few typical reactions involved in the biosynthesis of various classes of alkaloids, including the synthesis of Schiff bases and the Mannich reaction.
Synthesis of Schiff bases
Schiff bases can be obtained by reacting amines with ketones or aldehydes. These reactions are a common method of producing C=N bonds.
In the biosynthesis of alkaloids, such reactions may take place within a molecule, such as in the synthesis of piperidine:
Mannich reaction
An integral component of the Mannich reaction, in addition to an amine and a carbonyl compound, is a carbanion, which plays the role of the nucleophile in the nucleophilic addition to the iminium ion formed by the reaction of the amine and the carbonyl compound.
The Mannich reaction can proceed both intermolecularly and intramolecularly:
Dimer alkaloids
In addition to the monomeric alkaloids described above, there are also dimeric, and even trimeric and tetrameric, alkaloids formed upon condensation of two, three, or four monomeric units. Dimeric alkaloids are usually formed from monomers of the same type through the following mechanisms:
Mannich reaction, resulting in, e.g., voacamine
Michael reaction (villalstonine)
Condensation of aldehydes with amines (toxiferine)
Oxidative addition of phenols (dauricine, tubocurarine)
Lactonization (carpaine).
There are also dimeric alkaloids formed from two distinct monomers, such as the vinca alkaloids vinblastine and vincristine, which are formed from the coupling of catharanthine and vindoline. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer. It is another derivative dimer of vindoline and catharanthine and is synthesised from anhydrovinblastine, starting either from leurosine or the monomers themselves.
Biological role
Alkaloids are among the most important and best-known secondary metabolites, i.e. biogenic substances not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. In some cases their function, if any, remains unclear. An early hypothesis, that alkaloids are the final products of nitrogen metabolism in plants, as urea and uric acid are in mammals, was refuted by the finding that their concentration fluctuates rather than steadily increases.
Most of the known functions of alkaloids are related to protection. For example, the aporphine alkaloid liriodenine produced by the tulip tree protects it from parasitic fungi. In addition, the presence of alkaloids in the plant prevents insects and chordate animals from eating it. However, some animals are adapted to alkaloids and even use them in their own metabolism. Alkaloid-related substances such as serotonin, dopamine and histamine are important neurotransmitters in animals. Alkaloids are also known to regulate plant growth. One example of an organism that uses alkaloids for protection is Utetheisa ornatrix, more commonly known as the ornate moth. Pyrrolizidine alkaloids render these larvae and adult moths unpalatable to many of their natural enemies, such as coccinellid beetles, green lacewings, insectivorous hemipterans and insectivorous bats. Another example of alkaloids being utilized occurs in the poison hemlock moth (Agonopterix alstroemeriana). This moth feeds on its highly toxic and alkaloid-rich host plant, poison hemlock (Conium maculatum), during its larval stage. A. alstroemeriana may benefit twofold from the toxicity of the naturally occurring alkaloids, both through the unpalatability of the species to predators and through the ability of A. alstroemeriana to recognize Conium maculatum as the correct location for oviposition. A fire ant venom alkaloid known as solenopsin has been demonstrated to protect queens of invasive fire ants during the foundation of new nests, thus playing a central role in the spread of this pest ant species around the world.
Applications
In medicine
Medical use of alkaloid-containing plants has a long history, and thus, when the first alkaloids were isolated in the 19th century, they immediately found application in clinical practice. Many alkaloids are still used in medicine, usually in the form of salts; widely used examples include the following:
Many synthetic and semisynthetic drugs are structural modifications of the alkaloids, which were designed to enhance or change the primary effect of the drug and reduce unwanted side-effects. For example, naloxone, an opioid receptor antagonist, is a derivative of thebaine that is present in opium.
In agriculture
Prior to the development of a wide range of relatively low-toxic synthetic pesticides, some alkaloids, such as salts of nicotine and anabasine, were used as insecticides. Their use was limited by their high toxicity to humans.
Use as psychoactive drugs
Preparations of plants containing alkaloids and their extracts, and later pure alkaloids, have long been used as psychoactive substances. Cocaine, caffeine, and cathinone are stimulants of the central nervous system. Mescaline and many indole alkaloids (such as psilocybin, dimethyltryptamine and ibogaine) have hallucinogenic effect. Morphine and codeine are strong narcotic pain killers.
There are alkaloids that do not have strong psychoactive effect themselves, but are precursors for semi-synthetic psychoactive drugs. For example, ephedrine and pseudoephedrine are used to produce methcathinone and methamphetamine. Thebaine is used in the synthesis of many painkillers such as oxycodone.
See also
Amine
Base (chemistry)
List of poisonous plants
Mayer's reagent
Natural products
Palau'amine
Secondary metabolite
Explanatory notes
Citations
General and cited references
External links
|
https://en.wikipedia.org/wiki/Antibody
|
An antibody (Ab), also known as an immunoglobulin (Ig), is a large, Y-shaped protein used by the immune system to identify and neutralize foreign objects such as pathogenic bacteria and viruses. The antibody recognizes a unique molecule of the pathogen, called an antigen. Each tip of the "Y" of an antibody contains a paratope (analogous to a lock) that is specific for one particular epitope (analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion).
To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety.
In contrast, the remainder of the antibody is relatively constant. In mammals, antibodies occur in a few variants, which define the antibody's class or isotype: IgA, IgD, IgE, IgG, and IgM.
The constant region at the trunk of the antibody includes sites involved in interactions with other components of the immune system. The class hence determines the function triggered by an antibody after binding to an antigen, in addition to some structural features.
Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response.
Together with B and T cells, antibodies comprise the most important part of the adaptive immune system.
They occur in two forms: one that is attached to a B cell, and the other, a soluble form, that is unattached and found in extracellular fluids such as blood plasma.
Initially, all antibodies are of the first form, attached to the surface of a B cell – these are then referred to as B-cell receptors (BCR).
After an antigen binds to a BCR, the B cell activates to proliferate and differentiate into either plasma cells, which secrete soluble antibodies with the same paratope, or memory B cells, which survive in the body to enable long-lasting immunity to the antigen.
Soluble antibodies are released into the blood and tissue fluids, as well as many secretions.
Because these fluids were traditionally known as humors, antibody-mediated immunity is sometimes known as, or considered a part of, humoral immunity.
The soluble Y-shaped units can occur individually as monomers, or in complexes of two to five units.
Antibodies are glycoproteins belonging to the immunoglobulin superfamily.
The terms antibody and immunoglobulin are often used interchangeably, though the term 'antibody' is sometimes reserved for the secreted, soluble form, i.e. excluding B-cell receptors.
Structure
Antibodies are heavy (~150 kDa) proteins of about 10 nm in size,
arranged in three globular regions that roughly form a Y shape.
In humans and most other mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds.
Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each.
These domains are usually represented in simplified schematics as rectangles.
Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ...
Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape.
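The ~150 kDa mass quoted earlier is just the sum of the four chains. A toy bookkeeping sketch, using commonly quoted approximate chain masses that are not stated in the text:

```python
# Approximate chain masses (kDa); an antibody unit is 2 heavy + 2 light chains.
HEAVY_KDA = 50.0  # typical approximate mass of one heavy chain
LIGHT_KDA = 25.0  # typical approximate mass of one light chain

def antibody_mass_kda(n_units=1):
    """Approximate mass of an antibody complex of n Y-shaped units."""
    return n_units * (2 * HEAVY_KDA + 2 * LIGHT_KDA)

print(antibody_mass_kda())   # 150.0 — matches the ~150 kDa quoted above
print(antibody_mass_kda(5))  # 750.0 — an IgM-like pentamer, ignoring the J chain
```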
In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily.
In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction.
Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ.
This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies.
Antigen-binding site
The variable domains can also be referred to as the FV region. It is the subregion of Fab that binds to an antigen.
More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody.
When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody.
These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen.
Three CDRs from each of the heavy and light chains together form an antigen-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen.
Typically however only a few residues contribute to most of the binding energy.
The existence of two identical antigen-binding sites allows antibody molecules to bind strongly to multivalent antigens (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes. The resulting cross-linking plays a role in activating other parts of the immune system.
The structures of CDRs have been clustered and classified by Chothia et al.
and more recently by North et al.
and Nikoloudis et al. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
Fc region
The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind to, triggering various effects after the antibody Fab region binds to an antigen.
Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot, therefore IgA does not activate the classical complement pathway.
Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport them across the placenta, from the mother to the fetus.
Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues.
These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules.
Protein structure
The N-terminus of each chain is situated at the tip.
Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily:
it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif.
The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond.
Antibody complexes
Secreted antibodies can occur as a single Y-shaped unit, a monomer.
However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units).
Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex.
Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc.
Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies.
An extreme example is the clumping, or agglutination, of red blood cells with antibodies in the Coombs test to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation.
B cell receptors
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors.
These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
Classes
Antibodies can come in different varieties known as isotypes or classes. In placental mammals there are five antibody classes known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1 and IgA2.
The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively.
The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region.
The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table.
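The class-to-heavy-chain correspondence described above amounts to a simple lookup. A minimal sketch in Python (the mapping follows the text; the function name is ours):

```python
# Heavy-chain type -> antibody class, per the correspondence in the text:
# alpha -> IgA, gamma -> IgG, delta -> IgD, epsilon -> IgE, mu -> IgM.
HEAVY_CHAIN_TO_CLASS = {
    "alpha": "IgA",
    "gamma": "IgG",
    "delta": "IgD",
    "epsilon": "IgE",
    "mu": "IgM",
}

def antibody_class(heavy_chain: str) -> str:
    """Return the immunoglobulin class for a given heavy-chain type."""
    return HEAVY_CHAIN_TO_CLASS[heavy_chain.lower()]

print(antibody_class("gamma"))  # IgG
```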
For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often a major contributor to asthma (though other pathways exist, as do asthma-like conditions that are not technically asthma). The antibody's variable region binds to allergic antigen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to Fc receptor ε on a mast cell, triggering its degranulation: the release of molecules stored in its granules.
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system.
Light chain types
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
In non-mammalian animals
In most placental mammals, the structure of antibodies is generally the same.
Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier.
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains), which moreover form longer-chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies.
Antibody–antigen interactions
The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants.
Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities.
Function
The main categories of antibody action include the following:
Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective
Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis
Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis
Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following:
Lysis of the foreign cell
Encouragement of inflammation by chemotactically attracting inflammatory cells
More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity.
Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: They prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens).
Activation of complement
Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis).
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, natural killer cells will release cytokines and cytotoxic molecules; that will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Natural antibodies
Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Immunoglobulin diversity
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
Domain variability
The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combination is called V(D)J recombination, discussed below.
V(D)J recombination
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (i.e. V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences the biology of B cells.
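The scale of this combinatorial diversity can be illustrated with a back-of-the-envelope calculation. The segment counts below are approximate functional-gene numbers for the human loci, used here only for illustration; actual diversity is far greater once junctional diversity and somatic hypermutation are included:

```python
# Rough combinatorial-diversity estimate for human immunoglobulins.
# Segment counts are approximate functional-gene numbers (illustrative).

heavy = {"V": 40, "D": 23, "J": 6}    # IGH locus (V, D and J segments)
kappa = {"V": 40, "J": 5}             # IGK locus (V and J only)
lambda_ = {"V": 30, "J": 4}           # IGL locus (V and J only)

# One V, one D, one J per heavy chain; one V, one J per light chain.
heavy_combos = heavy["V"] * heavy["D"] * heavy["J"]
light_combos = kappa["V"] * kappa["J"] + lambda_["V"] * lambda_["J"]

total = heavy_combos * light_combos
print(f"heavy-chain combinations: {heavy_combos:,}")  # 5,520
print(f"light-chain combinations: {light_combos:,}")  # 320
print(f"heavy x light pairings:   {total:,}")         # 1,766,400
```

Even these crude numbers show why a few hundred gene segments suffice to seed a repertoire that junctional diversity and hypermutation then expand toward the billions.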
RAG proteins play an important role with V(D)J recombination in cutting DNA at a particular region. Without the presence of these proteins, V(D)J recombination would not occur.
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion) thus each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
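The mutate-and-select logic of affinity maturation can be caricatured with a toy simulation. This is a deliberately simplified model, not a biological one: affinities, mutation ranges, and survival fractions are arbitrary illustrative values:

```python
import random

def affinity_maturation(rounds=10, pop_size=200, keep_frac=0.25, seed=0):
    """Toy mutate-and-select loop: each division perturbs affinity
    slightly up or down (somatic hypermutation), and only the
    highest-affinity cells receive a survival signal, so the mean
    affinity of the population drifts upward over successive rounds."""
    rng = random.Random(seed)
    pop = [1.0] * pop_size                    # starting affinities
    for _ in range(rounds):
        # hypermutation: small random multiplicative change per division
        pop = [a * rng.uniform(0.8, 1.25) for a in pop]
        # survival signal: keep only the highest-affinity cells...
        pop.sort(reverse=True)
        survivors = pop[: int(pop_size * keep_frac)]
        # ...which proliferate back to the original population size
        pop = [rng.choice(survivors) for _ in range(pop_size)]
    return sum(pop) / len(pop)

print(f"mean affinity after selection: {affinity_maturation():.2f}")
```

Although individual mutations are as likely to weaken binding as to strengthen it, selection of the strongest binders each round raises the population's average affinity, mirroring the outcompetition described above.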
Class switching
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
Specificity designations
An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell.
Asymmetrical antibodies
Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody dependent cell mediated cytotoxicity. Single chain variable fragments (scFv) are connected to the variable domain of the heavy and light chain via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation.
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms.
Heterodimeric antibodies have a greater range in shapes they can take and the drugs that are attached to the arms do not have to be the same on each arm, allowing for different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form should lead to decreased functionality.
History
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something.
The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. This idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
Medical applications
Disease diagnosis
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed.
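A titer from a serial-dilution assay is conventionally reported as the reciprocal of the highest dilution that still gives a positive signal. A minimal sketch of that convention (the dilution series, readings, and cutoff below are hypothetical values for illustration):

```python
def antibody_titer(signals, cutoff, start_dilution=10, step=2):
    """Return the titer: the reciprocal of the highest dilution in a
    serial dilution series whose signal still exceeds the assay cutoff.
    `signals` is ordered from least dilute to most dilute."""
    titer = 0
    dilution = start_dilution
    for s in signals:
        if s < cutoff:
            break          # first sub-cutoff reading ends the series
        titer = dilution
        dilution *= step
    return titer

# Hypothetical ELISA readings for a two-fold dilution series
# starting at 1:10 (cutoff chosen for illustration only).
readings = [2.1, 1.8, 1.2, 0.7, 0.3, 0.1]
print(antibody_titer(readings, cutoff=0.5))  # 80, i.e. positive out to 1:80
```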
In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis.
Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women.
Practically, several immunodiagnostic methods based on detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over the counter pregnancy tests.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer.
Disease therapy
Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer.
Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.
Prenatal therapy
Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rh-negative mother has a Rh-positive fetus. Treatment of a mother with Anti-RhD antibodies prior to and immediately after trauma and delivery destroys Rh antigen in the mother's system from the fetus. This occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) Immune Globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Research applications
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse (larger animals are used when large quantities of antibody are needed). Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques.
Antibodies used in research are some of the most powerful, yet most problematic, reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent, and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested, and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in four journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11).
Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids to tag proteins with the Fc region of the antibody, such as the pFUSE-Fc plasmid.
Regulations
Production and testing
Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include:
Demonstration that the process consistently produces antibody of good quality (process validation)
The efficiency of antibody purification (all impurities and viruses must be removed)
Characterization of the purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, etc.)
Virus clearance studies
Before clinical trials
Product safety testing: sterility testing (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, and so on. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the product's potential dangers.
Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing).
Preclinical studies
Testing cross-reactivity of antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate determined against quick-frozen adult tissues) or in vivo (with appropriate animal models).
Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing
Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate potential clinical effects
Structure prediction and computational antibody design
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen-binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody FV region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on the structural bioinformatics studies of antibody CDRs.
There are a variety of methods used to sequence an antibody, including Edman degradation and cDNA sequencing; however, the most common modern approach to peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in the attempt to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies can assemble protein sequences with high accuracy by integrating de novo sequenced peptides, intensity, and positional confidence scores from database and homology searches.
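As a minimal sketch of the arithmetic that underlies de novo sequencing from tandem mass spectra, the following computes the singly charged b- and y-ion mass ladders that spectrum peaks are matched against. The residue masses are standard monoisotopic values; the function name and structure are illustrative, not taken from any particular sequencing tool.

```python
# Sketch: b-/y-ion ladders for a peptide, as used in de novo MS/MS sequencing.
# Assumption: singly protonated fragments; masses in daltons (monoisotopic).

PROTON = 1.00728   # proton mass, Da
WATER = 18.01056   # H2O mass, Da

RESIDUE_MASS = {   # standard monoisotopic amino-acid residue masses, Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}

def fragment_ladders(peptide):
    """Return singly charged b-ion and y-ion m/z ladders (b1..b(n-1), y1..y(n-1))."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    b_ions, prefix = [], 0.0
    for m in masses[:-1]:          # N-terminal fragments keep no water
        prefix += m
        b_ions.append(prefix + PROTON)
    y_ions, suffix = [], 0.0
    for m in reversed(masses[1:]):  # C-terminal fragments keep the C-terminal OH
        suffix += m
        y_ions.append(suffix + WATER + PROTON)
    return b_ions, y_ions

b, y = fragment_ladders("PEPTIDE")
print(round(b[0], 4))  # b1 of P: 97.05276 + 1.00728 = 98.0600
```

A useful sanity check, exploited by de novo algorithms, is complementarity: for an n-residue peptide, b_i + y_(n-i) equals the peptide mass plus water plus two protons.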
Antibody mimetic
Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic, and therapeutic agents.
Binding antibody unit
BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.
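In practice, a laboratory converts its assay's arbitrary readout into BAU/mL by calibrating once against a WHO International Standard with an assigned BAU/mL value. The sketch below assumes a linear assay and uses made-up numbers; the function names are illustrative.

```python
# Sketch: converting an assay's arbitrary units (AU/mL) to BAU/mL.
# Assumption: the assay responds linearly, so a single measurement of the
# WHO standard yields a fixed conversion factor. All figures are invented.

def calibration_factor(standard_bau_per_ml, standard_reading_au_per_ml):
    """BAU per AU, from one measurement of the assigned-value standard."""
    return standard_bau_per_ml / standard_reading_au_per_ml

def to_bau(sample_reading_au_per_ml, factor):
    """Convert a sample readout to BAU/mL using the calibration factor."""
    return sample_reading_au_per_ml * factor

# Example: the standard is assigned 1000 BAU/mL and reads 250 AU/mL on our
# hypothetical assay, so 1 AU corresponds to 4 BAU.
f = calibration_factor(1000.0, 250.0)
print(to_bau(80.0, f))  # 320.0
```

Because two assays calibrated this way report on the same scale, their results become comparable, which is the point of the unit.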
See also
Affimer
Anti-mitochondrial antibodies
Anti-nuclear antibodies
Antibody mimetic
Aptamer
Colostrum
ELISA
Humoral immunity
Immunology
Immunosuppressive drug
Intravenous immunoglobulin (IVIg)
Magnetic immunoassay
Microantibody
Monoclonal antibody
Neutralizing antibody
Optimer Ligand
Secondary antibodies
Single-domain antibody
Slope spectroscopy
Synthetic antibody
Western blot normalization
References
External links
Mike's Immunoglobulin Structure/Function Page at University of Cambridge
Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank
A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford
How Lymphocytes Produce Antibody from Cells Alive!
Glycoproteins
Immunology
Reagents for biochemistry
|
https://en.wikipedia.org/wiki/Anode
|
An anode is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow from the anode of a galvanic cell, into an outside or external circuit connected to the cell. For example, the end of a household battery marked with a "+" is the cathode (while discharging).
In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result of this, anions will tend to move towards the anode where they will undergo oxidation.
Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc.
Charge flow
The terms anode and cathode are not defined by the voltage polarity of electrodes but the direction of current through the electrode. An anode is an electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode is an electrode through which conventional current flows out of the device. If the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed.
Conventional current depends not only on the direction the charge carriers move, but also the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode.
The definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction. Therefore, the electrodes are named based on the direction of this "forward" current. In a diode the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change in cases where reverse current flows through the device. Similarly, in a vacuum tube only one electrode can emit electrons into the evacuated tube due to being heated by a filament, so electrons can only enter the device from the external circuit through the heated electrode. Therefore, this electrode is permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode.
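The naming rule above can be condensed into a few lines: for an electrochemical cell, whichever terminal conventional current enters through (from the external circuit) is the anode, and the labels swap when the current reverses, as in a battery being recharged. This is a sketch of the convention only; diodes and vacuum tubes, as just noted, keep fixed names. The function name and sign convention are illustrative.

```python
# Sketch: electrode naming by current direction (not by voltage polarity).
# Convention assumed here: positive means conventional current flows INTO
# terminal A from the external circuit.

def electrode_roles(conventional_current_into_terminal_a):
    """Return (role of terminal A, role of terminal B) for a two-terminal cell."""
    if conventional_current_into_terminal_a > 0:
        return ("anode", "cathode")
    return ("cathode", "anode")

# Discharging battery: current enters the negative terminal A, so A is the
# anode; on recharge the current reverses and the roles swap.
print(electrode_roles(+1.0))  # ('anode', 'cathode')
print(electrode_roles(-1.0))  # ('cathode', 'anode')
```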
Examples
The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power:
In a discharging battery or galvanic cell (diagram on left), the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards.
In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging.
In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged. This is despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal.
In a diode, the anode is the terminal represented by the tail of the arrow symbol (flat side of the triangle), where conventional current flows into the device. Note the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current.
In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube.
Etymology
The word was coined in 1834 from the Greek ἄνοδος (anodos), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: "ano upwards, odos a way; the way which the sun rises".
The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the anode's function any more, but more importantly because as we now know, the Earth's magnetic field direction on which the "anode" term is based is subject to reversals whereas the current direction convention on which the "eisode" term was based has no reason to change in the future.
Since the later discovery of the electron, an easier-to-remember etymology, more durably correct technically although historically false, has been suggested: anode, from the Greek anodos, 'way up', 'the way (up) out of the cell (or other device) for electrons'.
Electrolytic anode
In electrochemistry, the anode is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation) which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), or AnOx Red Cat (Anode Oxidation, Reduction Cathode), or OIL RIG (Oxidation is Loss, Reduction is Gain of electrons), or Roman Catholic and Orthodox (Reduction – Cathode, anode – Oxidation), or LEO the lion says GER (Losing electrons is Oxidation, Gaining electrons is Reduction).
This process is widely used in metals refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high purity (99.99%) cathodes. Copper cathodes produced using this method are also described as electrolytic copper.
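The amount of metal transferred in such refining follows Faraday's first law of electrolysis: the mass deposited is m = (Q / F) · (M / z), where Q is the charge passed, F the Faraday constant, M the molar mass, and z the ion's charge number. A minimal sketch for copper (Cu²⁺, so z = 2):

```python
# Sketch: Faraday's first law of electrolysis, m = (Q / F) * (M / z).
# Applied to copper electrorefining; the 200 A / 1 h figures are illustrative.

FARADAY = 96485.33  # Faraday constant, C/mol

def mass_deposited(current_a, time_s, molar_mass_g_mol, z):
    """Grams of metal deposited at the cathode for a given current and time."""
    charge = current_a * time_s          # Q = I * t, coulombs
    moles = charge / (z * FARADAY)       # moles of metal ions reduced
    return moles * molar_mass_g_mol

# Cu2+ (z = 2, M = 63.546 g/mol): 200 A for one hour deposits roughly 237 g.
print(round(mass_deposited(200, 3600, 63.546, 2), 1))
```

One mole of electrons (one faraday of charge) deposits half a mole of copper, about 31.77 g, which is a convenient sanity check on the arithmetic.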
Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions but otherwise does not participate in the reaction.
Battery or galvanic cell anode
In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally the positively charged cations are flowing away from the anode (even though it is negative and therefore would be expected to attract them, this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems); but, external to the cell in the circuit, electrons are being pushed out through the negative contact and thus through the circuit by the voltage potential as would be expected. Note: in a galvanic cell, contrary to what occurs in an electrolytic cell, no anions flow to the anode, the internal current being entirely accounted for by the cations flowing away from it (cf drawing).
Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though technically incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell. Using the traditional definition, the anode switches ends between charge and discharge cycles.
Vacuum tube anode
In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. In a tube, the anode is a charged positive plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons.
Diode anode
In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' for positive charge-carrier ions). This creates a base negative charge on the anode. When a positive voltage is applied to anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage).
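The strongly one-way conduction that fixes the diode's electrode names can be illustrated with the ideal-diode (Shockley) equation, I = I_s · (exp(V / (n·V_T)) − 1). The saturation current and ideality factor below are typical illustrative values, not data for any specific part.

```python
# Sketch: the Shockley ideal-diode equation. With the anode positive relative
# to the cathode (forward bias) the current is substantial; reversed, it is
# limited to the tiny saturation current. Parameter values are illustrative.

import math

def diode_current(v_anode_minus_cathode, i_s=1e-12, n=1.0, v_t=0.02585):
    """I = I_s * (exp(V / (n * V_T)) - 1), with V_T the thermal voltage at ~300 K."""
    return i_s * (math.exp(v_anode_minus_cathode / (n * v_t)) - 1.0)

print(diode_current(0.7))   # forward bias: hundreds of milliamps
print(diode_current(-0.7))  # reverse bias: about -I_s, i.e. ~ -1e-12 A
```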
Sacrificial anode
In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which will dissolve into the seawater and prevent the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters.
In 1824, to reduce the impact of this destructive electrolytic action on ships' hulls, their fastenings, and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal, attached to the vessel hull and electrically connected to form a cathodic protection circuit.
A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but that the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes.
If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron.
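The contrast between zinc and tin coatings comes down to standard electrode potentials: a sacrificial coating must be less noble (more negative potential) than the metal it protects. A minimal sketch with textbook standard reduction potentials (volts vs the standard hydrogen electrode):

```python
# Sketch: choosing a sacrificial coating by standard electrode potential.
# A coating protects the structure only if it is less noble (more negative
# potential), so that the coating, not the structure, is oxidized at a breach.

STANDARD_POTENTIAL_V = {  # standard reduction potentials, V vs SHE (approx.)
    "Mg": -2.37, "Al": -1.66, "Zn": -0.76, "Fe": -0.44, "Sn": -0.14, "Cu": +0.34,
}

def protects(coating, structure):
    """True if the coating corrodes sacrificially instead of the structure."""
    return STANDARD_POTENTIAL_V[coating] < STANDARD_POTENTIAL_V[structure]

print(protects("Zn", "Fe"))  # True  -- galvanized steel: the zinc is sacrificed
print(protects("Sn", "Fe"))  # False -- breached tin plate: the iron corrodes
```

This is the quantitative form of the galvanising and tin-plating observations above: zinc sits below iron in the series, tin sits above it.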
Impressed current anode
Another cathodic protection is used on the impressed current anode. It is made from titanium and covered with mixed metal oxide. Unlike the sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current provided by a DC source to create the cathodic protection. Impressed current anodes are used in larger structures like pipelines, boats, and water heaters.
Related antonym
The opposite of an anode is a cathode. When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes the anode, for as long as the reversed current flows. The exception is diodes, where electrode naming is always based on the forward current direction.
See also
Anodizing
Galvanic anode
Gas-filled tube
Primary cell
Redox (reduction–oxidation)
References
External links
The Cathode Ray Tube site
How to define anode and cathode
Valence Technologies Inc. battery education page
Cathodic Protection Technical Library
Electrodes
|
https://en.wikipedia.org/wiki/Adhesive
|
Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation.
The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, or welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion followed by reactive or non-reactive, a term which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or whether their raw stock is of natural or synthetic origin.
Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared in approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present.
History
Evidence of the earliest known use of adhesives was discovered in central Italy when two stone flakes partially covered with birch-bark tar and a third uncovered stone from the Middle Pleistocene era (circa 200,000 years ago) were found. This is thought to be the oldest discovered human use of tar-hafted stones.
The birch-bark-tar adhesive is a simple, one-component adhesive. A study from 2019 showed that birch tar production can be a very simple process, merely involving the burning of birch bark near smooth vertical surfaces in open-air conditions. Although sticky enough, plant-based adhesives are brittle and vulnerable to environmental conditions. The first use of compound adhesives was discovered in Sibudu, South Africa. Here, 70,000-year-old stone segments that were once inserted in axe hafts were discovered covered with an adhesive composed of plant gum and red ochre (natural iron oxide); adding ochre to plant gum produces a stronger product and protects the gum from disintegrating under wet conditions. The ability to produce stronger adhesives allowed middle Stone Age humans to attach stone segments to sticks in greater variations, which led to the development of new tools.
More recent examples of adhesive use by prehistoric humans have been found at the burial sites of ancient tribes. Archaeologists studying the sites found that approximately 6,000 years ago the tribesmen had buried their dead together with food found in broken clay pots repaired with tree resins. Another investigation by archaeologists uncovered the use of bituminous cements to fasten ivory eyeballs to statues in Babylonian temples dating to approximately 4000 BC.
In 2000, a paper revealed the discovery of a 5,200-year-old man nicknamed the "Tyrolean Iceman" or "Ötzi", who was preserved in a glacier near the Austria-Italy border. Several of his belongings were found with him including two arrows with flint arrowheads and a copper hatchet, each with evidence of organic glue used to connect the stone or metal parts to the wooden shafts. The glue was analyzed as pitch, which requires the heating of tar during its production. The retrieval of this tar requires a transformation of birch bark by means of heat, in a process known as pyrolysis.
The first references to adhesives in literature appeared in approximately 2000 BC. Further historical records of adhesive use are found from the period spanning 1500–1000 BC. Artifacts from this period include paintings depicting wood gluing operations and a casket made of wood and glue in King Tutankhamun's tomb. Other ancient Egyptian artifacts employ animal glue for bonding or lamination. Such lamination of wood for bows and furniture is thought to have extended their life and was accomplished using casein (milk protein)-based glues. The ancient Egyptians also developed starch-based pastes for the bonding of papyrus to clothing and a plaster of Paris-like material made of calcined gypsum.
From AD 1 to 500 the Greeks and Romans made great contributions to the development of adhesives. Wood veneering and marquetry were developed, the production of animal and fish glues refined, and other materials utilized. Egg-based pastes were used to bond gold leaves, and incorporated various natural ingredients such as blood, bone, hide, milk, cheese, vegetables, and grains. The Greeks began the use of slaked lime as mortar while the Romans furthered mortar development by mixing lime with volcanic ash and sand. This material, known as pozzolanic cement, was used in the construction of the Roman Colosseum and Pantheon. The Romans were also the first people known to have used tar and beeswax as caulk and sealant between the wooden planks of their boats and ships.
In Central Asia, the rise of the Mongols in approximately AD 1000 can be partially attributed to the good range and power of the bows of Genghis Khan's hordes. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue.
In Europe, glue fell into disuse until the period AD 1500–1700. At this time, world-renowned cabinet and furniture makers such as Thomas Chippendale and Duncan Phyfe began to use adhesives to hold their products together. In 1690, the first commercial glue plant was established in The Netherlands. This plant produced glues from animal hides. In 1750, the first British glue patent was issued for fish glue. The following decades of the next century witnessed the manufacture of casein glues in German and Swiss factories. In 1876, the first U.S. patent (number 183,024) was issued to the Ross brothers for the production of casein glue.
The first U.S. postage stamps used starch-based adhesives when issued in 1847. The first US patent (number 61,991) on dextrin (a starch derivative) adhesive was issued in 1867.
Natural rubber was first used as material for adhesives starting in 1830, which marked the starting point of the modern adhesive. In 1862, a British patent (number 3288) was issued for the plating of metal with brass by electrodeposition to obtain a stronger bond to rubber. The development of the automobile and the need for rubber shock mounts required stronger and more durable bonds of rubber and metal. This spurred the development of cyclized rubber treated in strong acids. By 1927, this process was used to produce solvent-based thermoplastic rubber cements for metal to rubber bonding.
Natural rubber-based sticky adhesives were first used on a backing by Henry Day (US Patent 3,965) in 1845. Later these kinds of adhesives were used in cloth backed surgical and electric tapes. By 1925, the pressure-sensitive tape industry was born.
Today, sticky notes, Scotch Tape, and other tapes are examples of pressure-sensitive adhesives (PSA).
A key step in the development of synthetic plastics was the introduction of a thermoset plastic known as Bakelite phenolic in 1910. Within two years, phenolic resin was applied to plywood as a coating varnish. In the early 1930s, phenolics gained importance as adhesive resins.
The 1920s, 1930s, and 1940s witnessed great advances in the development and production of new plastics and resins due to the First and Second World Wars. These advances greatly improved the development of adhesives by allowing the use of newly developed materials that exhibited a variety of properties. With changing needs and ever evolving technology, the development of new synthetic adhesives continues to the present. However, due to their low cost, natural adhesives are still more commonly used.
Types
Adhesives are typically organized by the method of adhesion. These are then organized into reactive and non-reactive adhesives, which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized by whether the raw stock is of natural or synthetic origin, or by their starting physical phase.
By reactiveness
Non-reactive
Drying
There are two types of adhesives that harden by drying: solvent-based adhesives and polymer dispersion adhesives, also known as emulsion adhesives. Solvent-based adhesives are a mixture of ingredients (typically polymers) dissolved in a solvent. White glue, contact adhesives and rubber cements are members of the drying adhesive family. As the solvent evaporates, the adhesive hardens. Depending on the chemical composition of the adhesive, they will adhere to different materials to greater or lesser degrees.
Polymer dispersion adhesives are milky-white dispersions often based on polyvinyl acetate (PVAc). They are used extensively in the woodworking and packaging industries. They are also used with fabrics and fabric-based components, and in engineered products such as loudspeaker cones.
Pressure-sensitive
Pressure-sensitive adhesives (PSA) form a bond by the application of light pressure to marry the adhesive with the adherend. They are designed to have a balance between flow and resistance to flow. The bond forms because the adhesive is soft enough to flow (i.e., "wet") to the adherend. The bond has strength because the adhesive is hard enough to resist flow when stress is applied to the bond. Once the adhesive and the adherend are in close proximity, molecular interactions, such as van der Waals forces, become involved in the bond, contributing significantly to its ultimate strength.
PSAs are designed for either permanent or removable applications. Examples of permanent applications include safety labels for power equipment, foil tape for HVAC duct work, automotive interior trim assembly, and sound/vibration damping films. Some high performance permanent PSAs exhibit high adhesion values and can support kilograms of weight per square centimeter of contact area, even at elevated temperatures. Permanent PSAs may initially be removable (for example to recover mislabeled goods) and build adhesion to a permanent bond after several hours or days.
Removable adhesives are designed to form a temporary bond, and ideally can be removed after months or years without leaving residue on the adherend. Removable adhesives are used in applications such as surface protection films, masking tapes, bookmark and note papers, barcode labels, price marking labels, promotional graphics materials, and for skin contact (wound care dressings, EKG electrodes, athletic tape, analgesic and transdermal drug patches, etc.). Some removable adhesives are designed to repeatedly stick and unstick. They have low adhesion, and generally cannot support much weight. Pressure-sensitive adhesive is used in Post-it notes.
Pressure-sensitive adhesives are manufactured with either a liquid carrier or in 100% solid form. Articles are made from liquid PSAs by coating the adhesive and drying off the solvent or water carrier. They may be further heated to initiate a cross-linking reaction and increase molecular weight. 100% solid PSAs may be low-viscosity polymers that are coated and then reacted with radiation to increase molecular weight and form the adhesive, or they may be high-viscosity materials that are heated to reduce viscosity enough to allow coating, and then cooled to their final form. The major raw materials for PSAs are acrylate-based polymers.
Contact
Contact adhesives are used for strong bonds with high shear resistance, such as laminates (e.g., bonding Formica to a wooden counter) and footwear (attaching outsoles to uppers). Natural rubber and polychloroprene (Neoprene) are commonly used contact adhesives. Both of these elastomers undergo strain crystallization.
Contact adhesives must be applied to both surfaces and allowed some time to dry before the two surfaces are pushed together. Some contact adhesives require as long as 24 hours to dry before the surfaces are to be held together. Once the surfaces are pushed together, the bond forms very quickly. It is usually not necessary to apply pressure for a long time, so there is less need for clamps.
Hot
Hot adhesives, also known as hot melt adhesives, are thermoplastics applied in molten form (in the 65–180 °C range) which solidify on cooling to form strong bonds between a wide range of materials. Ethylene-vinyl acetate-based hot-melts are particularly popular for crafts because of their ease of use and the wide range of common materials they can join. A glue gun is one method of applying hot adhesives: it melts the solid adhesive, then allows the liquid to pass through its barrel onto the material, where it solidifies.
Thermoplastic glue may have been invented around 1940 by Procter & Gamble as a solution to the problem that water-based adhesives, commonly used in packaging at that time, failed in humid climates, causing packages to open. However, water-based adhesives are still of strong interest as they typically do not contain volatile solvents.
Reactive
Anaerobic
Anaerobic adhesives cure when in contact with metal, in the absence of oxygen. They work well in close-fitting spaces, as when used as a thread-locking fluid.
Multi-part
Multi-component adhesives harden by mixing two or more components which chemically react. This reaction causes polymers to cross-link into acrylates, urethanes, and epoxies.
There are several commercial combinations of multi-component adhesives in use in industry. Some of these combinations are:
Polyester resin & polyurethane resin
Polyols & polyurethane resin
Acrylic polymers & polyurethane resins
The individual components of a multi-component adhesive are not adhesive by nature. The components react with each other after being mixed and show full adhesion only on curing. Multi-component resins can be either solvent-based or solvent-less. The solvents present in the adhesives are a medium for the polyester or polyurethane resin; the solvent evaporates during the curing process.
Pre-mixed and frozen adhesives
Pre-mixed and frozen adhesives (PMFs) are adhesives that are mixed, deaerated, packaged, and frozen. Because PMFs must remain frozen before use, once they are frozen at −80 °C they are shipped with dry ice and must be stored at or below −40 °C. PMF adhesives eliminate mixing mistakes by the end user and reduce exposure to curing agents that can contain irritants or toxins. PMFs were introduced commercially in the 1960s and are commonly used in aerospace and defense.
One-part
One-part adhesives harden via a chemical reaction with an external energy source, such as radiation, heat, or moisture.
Ultraviolet (UV) light curing adhesives, also known as light curing materials (LCM), have become popular within the manufacturing sector due to their rapid curing time and strong bond strength. Light curing adhesives can cure in as little as one second and many formulations can bond dissimilar substrates (materials) and withstand harsh temperatures. These qualities make UV curing adhesives essential to the manufacturing of items in many industrial markets such as electronics, telecommunications, medical, aerospace, glass, and optical. Unlike traditional adhesives, UV light curing adhesives not only bond materials together but they can also be used to seal and coat products. They are generally acrylic-based.
Heat curing adhesives consist of a pre-made mixture of two or more components. When heat is applied the components react and cross-link. This type of adhesive includes thermoset epoxies, urethanes, and polyimides.
Moisture curing adhesives cure when they react with moisture present on the substrate surface or in the air. This type of adhesive includes cyanoacrylates and urethanes.
By origin
Natural
Natural adhesives are made from organic sources such as vegetable starch (dextrin), natural resins, or animals (e.g. the milk protein casein and hide-based animal glues). These are often referred to as bioadhesives.
One example is a simple paste made by cooking flour in water. Starch-based adhesives are used in corrugated board and paper sack production, paper tube winding, and wallpaper adhesives. Casein glue is mainly used to adhere glass bottle labels. Animal glues have traditionally been used in bookbinding, wood joining, and many other areas but now are largely replaced by synthetic glues except in specialist applications like the production and repair of stringed instruments. Albumen made from the protein component of blood has been used in the plywood industry. Masonite, a wood hardboard, was originally bonded using natural wood lignin, an organic polymer, though most modern particle boards such as MDF use synthetic thermosetting resins.
Synthetic
Synthetic adhesives are made from synthetic organic compounds. Many are based on elastomers, thermoplastics, emulsions, and thermosets. Examples of thermosetting adhesives include epoxy, polyurethane, cyanoacrylate, and acrylic polymers. The first commercially produced synthetic adhesive was Karlsons Klister in the 1920s.
Application
Applicators of different adhesives are designed according to the adhesive being used and the size of the area to which the adhesive will be applied. The adhesive is applied to either one or both of the materials being bonded. The pieces are aligned and pressure is added to aid in adhesion and rid the bond of air bubbles.
Common ways of applying an adhesive include brushes, rollers, using films or pellets, spray guns and applicator guns (e.g., caulk gun). All of these can be used manually or automated as part of a machine.
Mechanisms of adhesion
For an adhesive to be effective it must have three main properties. Firstly, it must be able to wet the base material. Wetting is the ability of a liquid to maintain contact with a solid surface. It must also increase in strength after application, and finally it must be able to transmit load between the two surfaces/substrates being adhered.
Adhesion, the attachment between adhesive and substrate, may occur either by mechanical means, in which the adhesive works its way into small pores of the substrate, or by one of several chemical mechanisms. The strength of adhesion depends on many factors, including the means by which it occurs.
In some cases, an actual chemical bond occurs between adhesive and substrate. In others, electrostatic forces, as in static electricity, hold the substances together. A third mechanism involves the van der Waals forces that develop between molecules. A fourth means involves the moisture-aided diffusion of the glue into the substrate, followed by hardening.
Methods to improve adhesion
The quality of adhesive bonding depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, they bond poorly to low-surface-energy polymers or other materials. To solve this problem, surface treatment can be used to increase the surface energy as a preparation step before adhesive bonding. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. The commonly used surface activation techniques include plasma activation, flame treatment and wet chemistry priming.
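As an illustration of the wetting criterion just described, the sketch below simply compares surface energies of adhesive and substrate. The material names and numeric values are assumed typical published figures used only for illustration, not data from this article.

```python
# Rule of thumb from the text: an adhesive wets a substrate when the
# substrate's surface energy exceeds the adhesive's. Values below are
# approximate, assumed figures (mJ/m^2).
SURFACE_ENERGY = {
    "epoxy adhesive": 47,
    "PTFE": 18,                 # low-energy polymer, notoriously hard to bond
    "polyethylene": 31,         # another low-energy polymer
    "aluminum (treated)": 500,  # high-energy metal surface after preparation
}

def wets(adhesive: str, substrate: str) -> bool:
    """True if the substrate's surface energy exceeds the adhesive's."""
    return SURFACE_ENERGY[substrate] > SURFACE_ENERGY[adhesive]

print(wets("epoxy adhesive", "PTFE"))                # False: needs surface treatment
print(wets("epoxy adhesive", "aluminum (treated)"))  # True
```

This is why the surface-activation techniques named above (plasma, flame, priming) are applied: they raise the substrate's effective surface energy until the comparison flips.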
Failure
There are several factors that could contribute to the failure of two adhered surfaces. Sunlight and heat may weaken the adhesive. Solvents can deteriorate or dissolve adhesive. Physical stresses may also cause the separation of surfaces. When subjected to loading, debonding may occur at different locations in the adhesive joint. The major fracture types are the following:
Cohesive fracture
Cohesive fracture is obtained if a crack propagates in the bulk polymer which constitutes the adhesive. In this case the surfaces of both adherends after debonding will be covered by fractured adhesive. The crack may propagate in the center of the layer or near an interface. For this last case, the cohesive fracture can be said to be "cohesive near the interface".
Adhesive fracture
Adhesive fracture (sometimes referred to as interfacial fracture) is when debonding occurs between the adhesive and the adherend. In most cases, the occurrence of adhesive fracture for a given adhesive is associated with a lower fracture toughness.
Other types of fracture
Other types of fracture include:
The mixed type, which occurs if the crack propagates at some spots in a cohesive and in others in an interfacial manner. Mixed fracture surfaces can be characterised by a certain percentage of adhesive and cohesive areas.
The alternating crack path type which occurs if the cracks jump from one interface to the other. This type of fracture appears in the presence of tensile pre-stresses in the adhesive layer.
Fracture can also occur in the adherend if the adhesive is tougher than the adherend. In this case, the adhesive remains intact and is still bonded to one substrate and remnants of the other. For example, when one removes a price label, the adhesive usually remains on both the label and the surface; this is a cohesive failure. If, however, a layer of paper remains stuck to the surface, the adhesive has not failed. Another example is when someone tries to pull apart Oreo cookies and all the filling remains on one side; this is an adhesive failure, rather than a cohesive failure.
Design of adhesive joints
As a general design rule, the load-bearing capacity of the joint, determined by its geometry and material properties, must exceed the forces anticipated during its use. The engineering work consists of having a good model to evaluate the function. For most adhesive joints, this can be achieved using fracture mechanics. Concepts such as the stress concentration factor and the strain energy release rate can be used to predict failure. In such models, the behavior of the adhesive layer itself is neglected and only the adherends are considered.
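The strain-energy-release-rate approach can be sketched for a standard double cantilever beam (DCB) specimen, where simple beam theory gives G = 12 P²a² / (E b²h³); failure is predicted when G reaches the adhesive's critical value G_c. The formula is the textbook beam-theory expression, and all numeric values below are assumed illustrative inputs, not data from this article.

```python
def dcb_energy_release_rate(P, a, b, h, E):
    """Mode I strain energy release rate G (J/m^2) for a double cantilever
    beam specimen, per simple beam theory: G = 12 P^2 a^2 / (E b^2 h^3).
    P: load (N), a: crack length (m), b: specimen width (m),
    h: arm thickness (m), E: adherend Young's modulus (Pa)."""
    return 12 * P**2 * a**2 / (E * b**2 * h**3)

# Assumed example: aluminum arms (E ~ 70 GPa), a structural adhesive with
# a critical energy release rate G_c of roughly 200 J/m^2.
G = dcb_energy_release_rate(P=100.0, a=0.05, b=0.025, h=0.003, E=70e9)
G_c = 200.0
print(f"G = {G:.0f} J/m^2, failure predicted: {G >= G_c}")
# -> G = 254 J/m^2, failure predicted: True
```

Note how G grows with the square of the crack length: once a crack starts in this geometry, the driving force increases, which is why the stability of crack propagation matters in joint design.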
Failure will also very much depend on the opening mode of the joint.
Mode I is an opening or tensile mode where the loadings are normal to the crack.
Mode II is a sliding or in-plane shear mode where the crack surfaces slide over one another in a direction perpendicular to the leading edge of the crack. This is typically the mode in which the adhesive exhibits the highest resistance to fracture.
Mode III is a tearing or antiplane shear mode.
As the loads are usually fixed, an acceptable design will result from combination of a material selection procedure and geometry modifications, if possible. In adhesively bonded structures, the global geometry and loads are fixed by structural considerations and the design procedure focuses on the material properties of the adhesive and on local changes on the geometry.
Increasing the joint resistance is usually obtained by designing its geometry so that:
The bonded zone is large
It is mainly loaded in mode II
Stable crack propagation will follow the appearance of a local failure.
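The first rule of thumb above (a large bonded zone, loaded in shear) can be illustrated with a single-lap joint, where the average shear stress is the load divided by the bonded area. This is a hedged first-approximation sketch with assumed dimensions: real lap joints concentrate stress at the ends of the overlap, so the average understates the peak.

```python
def avg_shear_stress(force, width, overlap):
    """Average shear stress (Pa) in a single-lap joint: tau = F / (w * L).
    force: applied load (N), width and overlap: bond dimensions (m)."""
    return force / (width * overlap)

# Assumed example: a 1 kN load on a 25 mm wide joint.
tau_short = avg_shear_stress(1000.0, 0.025, 0.010)  # 10 mm overlap
tau_long  = avg_shear_stress(1000.0, 0.025, 0.020)  # 20 mm overlap
print(tau_short, tau_long)  # doubling the overlap halves the average stress
```

Enlarging the bonded zone therefore lowers the stress the adhesive must carry, which is the rationale behind the first design rule.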
Shelf life
Some glues and adhesives have a limited shelf life. Shelf life depends on multiple factors, the foremost of which is temperature. Adhesives may lose their effectiveness at high temperatures, as well as become increasingly stiff. Other factors affecting shelf life include exposure to oxygen or water vapor.
See also
Impact glue
References
Bibliography
Kinloch, Anthony J. (1987). Adhesion and Adhesives: Science and Technology. London: Chapman and Hall.
External links
Educational portal on adhesives and sealants
RoyMech: The theory of adhesive bonding
3M's Adhesive & Tapes Classification
Database of adhesives for attaching different materials
https://en.wikipedia.org/wiki/AMD
Advanced Micro Devices, Inc., commonly abbreviated as AMD, is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets.
The company was founded in 1969 by Jerry Sanders and a group of other technology professionals. AMD's early products were primarily memory chips and other components for computers. The company later expanded into the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, AMD experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors. In the late 2010s, AMD regained some of its market share thanks to the success of its Ryzen processors which are now widely regarded as superior to Intel products in business applications including cloud applications. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, a practice known as going fabless, after GlobalFoundries was spun off in 2009.
AMD's main products include microprocessors, motherboard chipsets, embedded processors, graphics processors, and FPGAs for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center and gaming markets, and has announced plans to enter the high-performance computing market.
History
First twelve years
Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968.
In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid.
In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available.
In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million.
AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09.
Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976.
In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.
Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation.
Technology exchange agreement with Intel
Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled.
Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips.
The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book The 100 Best Companies to Work for in America, and later made the Fortune 500 list for the first time in 1985.
By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips.
AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its own 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel.
AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO.
Acquisition of ATI, spin-off of GlobalFoundries, and acquisition of Xilinx
On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion and 58 million shares of its capital stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name.
In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009.
In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue.
AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip.
On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been serving as chief operating officer since June.
On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014.
After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments.
In October 2020, AMD announced that it was acquiring Xilinx in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion.
In October 2023, AMD acquired an open-source AI software provider, Nod.ai, to bolster its AI software ecosystem.
List of CEOs
Products
CPUs and APUs
IBM PC and the x86 architecture
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. In 1984, Intel internally decided to no longer cooperate with AMD in supplying product information to shore up its advantage in the marketplace, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD.
In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean room designed versions of Intel code for its x386 and x486 processors, the former long after Intel had released its own x386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year it had sold one million units.
In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement using the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor.
Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors.
K5, K6, Athlon, Duron, and Sempron
AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm the comic book character Superman; this was itself a reference to Intel's hegemony over the market, i.e., an anthropomorphization of Intel as Superman. The number "5" was a reference to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Patent and Trademark Office had ruled that mere numbers could not be trademarked.
In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor).
The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector; it instead used a Slot A connector, derived from the DEC Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64 KB instead of 256 KB of L2 cache) in a 462-pin socketed PGA (Socket A) or soldered directly onto the motherboard. Sempron was released as a lower-cost Athlon XP, replacing the Duron in the Socket A PGA era. It has since been migrated upward to all new sockets, up to AM3.
On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512KB L2 Cache was released.
Athlon 64, Opteron and Phenom
The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64.
On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment.
In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktop. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD also released a new platform codenamed "Spider", which used the new Phenom processor, an RV670 GPU, and a 790 GX/FX chipset from the AMD 700 chipset series. However, AMD built the Spider at 65 nm, which was uncompetitive with Intel's smaller and more power-efficient 45 nm process.
In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built on the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, an ATI R770 GPU from the R700 GPU family, and a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core, and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, adding DDR3 support in the new native Socket AM3 while maintaining backward compatibility with AM2+, the socket used for the Phenom, and thus with the DDR2 memory used on that platform.
In April 2010, AMD released a new Phenom II hexa-core (6-core) processor codenamed "Thuban". This was a new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allowed the processor to automatically switch from six cores to three faster cores when more single-threaded speed was needed.
The Magny Cours and Lisbon server parts were released in 2010. Magny Cours came in 8- and 12-core versions and Lisbon in 4- and 6-core versions. Magny Cours focused on raw performance, while Lisbon focused on performance per watt. Magny Cours is a multi-chip module (MCM) combining two hexa-core "Istanbul" Opteron dies. It used the new Socket G34 for dual- and quad-socket systems and was marketed as the Opteron 61xx series. Lisbon used Socket C32, certified for single- or dual-socket use, and was marketed as the Opteron 41xx series. Both were built on a 45 nm SOI process.
Fusion becomes the AMD APU
Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed Fusion was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built-in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g. floating-point unit operations) to the GPU, which is better optimized for some calculations. Fusion was later renamed the AMD APU (Accelerated Processing Unit).
Llano was AMD's first APU built for laptops and the second APU released, targeted at the mainstream market. It incorporated a CPU and GPU on the same die, as well as northbridge functions, and used Socket FM1 with DDR3 memory. The CPU portion of the processor was based on the Phenom II "Deneb" processor. AMD suffered an unexpected decrease in revenue due to production problems with Llano. AMD APUs subsequently became common in laptops running Windows 7 and Windows 8. These include AMD's low-price-point APUs, the E1 and E2, and the mainstream Vision A-series (the A standing for "accelerated"), which competed with Intel's Core i-series. The A-series ranges from the lower-performance A4 to the A6, A8, and A10. All incorporate Radeon graphics, with the A4 using the base Radeon HD chip and the rest using Radeon R4 graphics, except the highest-model A10 (A10-7300), which uses R6 graphics.
New microarchitectures
High-power, high-performance Bulldozer cores
Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This Family 15h microarchitecture is the successor to the Family 10h (K10) design. Bulldozer was a clean-sheet design, not a development of earlier processors, with the core specifically aimed at 10–125 W TDP computing products. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would make AMD performance-competitive with Intel once more, most benchmarks were disappointing, and in some cases the new Bulldozer products were slower than the K10 models they were built to replace.
The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism.
In 2015, the Excavator microarchitecture replaced Steamroller. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency.
Low-power Cat cores
The Bobcat microarchitecture was revealed in a speech by AMD executive vice-president Henri Richard at Computex 2007 and was put into production during the first quarter of 2011. Because competing in the x86 market with a single core optimized for the 10–100 W range had proven difficult, AMD developed a simpler core with a target range of 1–10 W. It was also believed that the core could migrate into the hand-held space if power consumption could be reduced to less than 1 W.
Jaguar is the microarchitecture codename for Bobcat's successor, released in 2013 and used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivatives went on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar was later followed by the Puma microarchitecture in 2014.
ARM architecture-based designs
In 2012, AMD announced it was working on ARM products, both as a semi-custom product and server product. The initial server product was announced as the Opteron A1100 in 2014, an 8-core Cortex-A57 based ARMv8-A SoC, and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release.
In 2014, AMD also announced the K12 custom core for release in 2016. While compliant with the ARMv8-A instruction set architecture, the K12 was expected to be entirely custom-designed, targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned, as development of AMD's x86-based Zen microarchitecture took priority.
Zen-based CPUs and APUs
Zen is a new x86-64 architecture underlying AMD's Ryzen series of CPUs and APUs, introduced in 2017 and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012 and taping out before his departure in September 2015. One of AMD's primary goals with Zen was an IPC increase of at least 40%; in February 2017, AMD announced that it had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were built on either the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs); because of this, Zen is much more energy efficient. The Zen architecture is the first to encompass CPUs and APUs from AMD built for a single socket (Socket AM4). Also new for this architecture is simultaneous multithreading (SMT), something Intel had offered for years on some of its processors through its proprietary Hyper-Threading implementation of SMT; this is a departure from the "Clustered Multithreading" design introduced with the Bulldozer architecture. Zen also supports DDR4 memory.
AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, the mid-range Ryzen 5 series on April 11, 2017, and the entry-level Ryzen 3 series on July 27, 2017. AMD later released the Epyc line of Zen-derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018, AMD announced its lineup plans for the second generation of Ryzen. AMD launched CPUs with the 12 nm Zen+ microarchitecture in April 2018 and the 7 nm Zen 2 microarchitecture in June 2019, including an update to the Epyc line with Zen 2-based processors in August 2019, with Zen 3 slated for release in Q3 2020.
As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors. At CES 2020, AMD announced the Ryzen Mobile 4000 series: the first 7 nm x86 mobile processors, the first 7 nm 8-core (16-thread) high-performance mobile processors, and the first 8-core (16-thread) processors for ultrathin laptops. This generation is still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture. On PassMark's single-thread performance test, the Ryzen 5 5600X bested all other CPUs except the Ryzen 9 5950X. In August 2022, AMD announced its initial lineup of CPUs based on the new Zen 4 architecture.
The Steam Deck, PlayStation 5, Xbox Series X, and Xbox Series S all use chips based on the Zen 2 microarchitecture, each with proprietary tweaks and configurations that differ from those of AMD's own commercially available APUs.
Graphics products and GPUs
ATI prior to AMD acquisition
Radeon within AMD
In 2008, the ATI division of AMD released the TeraScale microarchitecture, implementing a unified shader model. This design replaced the fixed-function hardware of earlier graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon-branded HD 2000 parts. Three generations of TeraScale were designed and used in parts from 2008 to 2014.
Combined GPU and CPU divisions
In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused graphics and general-purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced-instruction-set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970, five generations of the GCN architecture were produced, from 2011 through at least 2017.
Radeon Technologies Group
In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG) headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create and release the Polaris and Vega microarchitectures released in 2016 and 2017, respectively. In particular the Vega, or fifth generation GCN, microarchitecture includes a number of major revisions to improve performance and compute capabilities.
In November 2017, Raja Koduri left RTG, and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans had joined RTG: Mike Rayfield as senior vice president and general manager, and David Wang as senior vice president of engineering. In January 2020, AMD announced that its second-generation RDNA graphics architecture was in development, with the aim of competing with Nvidia's RTX graphics products for performance leadership. In October 2020, AMD announced the RX 6000 series GPUs, its first high-end products based on RDNA 2 and capable of handling ray tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs.
Semi-custom and game console products
In 2012, AMD's then CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory.
Other hardware
AMD motherboard chipsets
Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for its processors spanning the K6 and K7 generations, including the AMD-640, AMD-751, and AMD-761. The situation changed in 2003 with the release of Athlon 64 processors: AMD chose to stop designing its own chipsets for desktop processors and opened the desktop platform to allow other firms to design chipsets. Under this "Open Platform Management Architecture", ATI, VIA, and SiS developed their own chipsets for Athlon 64 processors and the later Athlon 64 X2 and Athlon 64 FX, including Nvidia's Quad FX platform chipset.
The initiative went further with the release of Opteron server processors, as AMD stopped designing server chipsets in 2004 after releasing the AMD-8111 chipset and again opened the server platform for firms to develop chipsets for Opteron processors. Subsequently, Nvidia and Broadcom were the sole firms designing server chipsets for Opteron processors.
As the company completed the acquisition of ATI Technologies in 2006, it gained ATI's chipset design team, which had previously designed the Radeon Xpress 200 and Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (developed under the codename RS690), targeted at mainstream IGP computing. It was the industry's first chipset to implement an HDMI 1.2 port on motherboards, shipping more than a million units. While ATI had aimed to release an Intel IGP chipset, the plan was scrapped and the inventories of the Radeon Xpress 1250 (codenamed RS600, sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD stated it would still produce Intel chipsets, Intel had not granted an FSB license to ATI.
On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed Spider desktop platform, and IGP chipsets were launched at a later time in spring 2008 as part of the codenamed Cartwheel platform.
AMD returned to the server chipset market with the AMD 800S series server chipsets, which include support for up to six SATA 6.0 Gbit/s ports, the C6 power state (featured in Fusion processors), and AHCI 1.2 with SATA FIS-based switching support. This chipset family supports Phenom processors, with the 890FX targeting the Quad FX enthusiast platform and the 890GX providing integrated graphics (IGP).
With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality.
AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia.
Embedded products
Embedded CPUs
In the early 1990s, AMD began marketing a series of embedded systems-on-a-chip (SoCs) called AMD Élan, starting with the SC300 and SC310. Both combine a 32-bit, low-voltage Am386SX CPU running at 25 or 33 MHz with a memory controller, PC/AT peripheral controllers, a real-time clock, PLL clock generators, and an ISA bus interface. The SC300 additionally integrates two PC Card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx parts, which supported VESA Local Bus and used the Am486 at clock speeds up to 100 MHz; an SC450 running at 33 MHz, for example, was used in the Nokia 9000 Communicator. The SC520, announced in 1999 and combining a 100 or 133 MHz Am586 with SDRAM and PCI support, was the last member of the series.
In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications.
In August 2003, AMD also purchased the Geode business, originally the Cyrix MediaGX, from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture, offered in fanless versions and a fan-cooled version with a TDP of 25 W. This technology is used in a variety of embedded systems (casino slot machines and customer kiosks, for instance), several UMPC designs in Asian markets, and the OLPC XO-1 computer, an inexpensive laptop intended to be distributed to children in developing countries. The Geode LX processor was announced in 2005 and was said to remain available through 2015.
AMD has also introduced 64-bit processors into its embedded product line, starting with the AMD Opteron. Leveraging the high throughput enabled by HyperTransport and the Direct Connect Architecture, these server-class processors were targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the AMD Opteron, but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD continued to add both single-core Mobile AMD Sempron and AMD Athlon processors and dual-core AMD Athlon X2 and AMD Turion processors to its embedded product line. It now offers embedded 64-bit solutions ranging from 8 W TDP Mobile AMD Sempron and AMD Athlon processors for fanless designs up to multi-processor systems using multi-core AMD Opteron processors, all supporting longer-than-standard availability.
The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold to Qualcomm, which has since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom.
In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video, such as emerging digital signage, kiosk, and point-of-sale applications. The M690T was followed by the M690E, specifically for embedded applications; it removed the TV output (which required Macrovision licensing for OEMs) and added native support for dual TMDS outputs, enabling two independent DVI interfaces.
In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016.
In May 2012, AMD announced the AMD Embedded R-Series Accelerated Processing Unit. This product family incorporates the Bulldozer CPU architecture and discrete-class Radeon HD 7000G series graphics. It was followed by a system-on-a-chip (SoC) version in 2015, which offered a faster CPU and faster graphics, plus support for DDR4 SDRAM.
Embedded graphics
AMD builds graphics processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module, including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since then, AMD has released regular updates to its embedded GPU lineup in 2009, 2011, 2015, and 2016, reflecting improvements in its GPU technology.
Current product lines
CPU and APU products
AMD's portfolio of CPUs and APUs
Athlon – brand of entry level CPUs (Excavator) and APUs (Ryzen)
A-series – Excavator-class consumer desktop and laptop APUs
G-series – Excavator- and Jaguar-class low-power embedded APUs
Ryzen – brand of consumer CPUs and APUs
Ryzen Threadripper – brand of prosumer/professional CPUs
R-series – Excavator class high-performance embedded APUs
Epyc – brand of server CPUs
Opteron – brand of microserver APUs
Graphics products
AMD's portfolio of dedicated graphics processors
Radeon – brand for consumer line of graphics cards; the brand name originated with ATI.
Mobility Radeon offers power-optimized versions of Radeon graphics chips for use in laptops.
Radeon Pro – Workstation graphics card brand. Successor to the FirePro brand.
Radeon Instinct – brand of server and workstation targeted machine learning and GPGPU products
Radeon-branded products
RAM
In 2011, AMD began selling Radeon-branded DDR3 SDRAM to support the higher bandwidth needs of its APUs. While the RAM is sold by AMD, it is manufactured by Patriot Memory and VisionTek. Higher-speed, gaming-oriented DDR3 memory followed in 2013. Radeon-branded DDR4 SDRAM was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time. AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it remains active in the business.
Solid-state drives
AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface.
Technologies
CPU hardware
Technologies found in AMD CPU/APU and other products include:
HyperTransport – a high-bandwidth, low-latency system bus used in AMD's CPU and APU products
Infinity Fabric – a derivative of HyperTransport used as the communication bus in AMD's Zen microarchitecture
Graphics hardware
Technologies found in AMD GPU products include:
AMD Eyefinity – facilitates multi-monitor setup of up to 6 monitors per graphics card
AMD FreeSync – display synchronization based on the VESA Adaptive Sync standard
AMD TrueAudio – acceleration of audio calculations
AMD XConnect – allows the use of External GPU enclosures through Thunderbolt 3
AMD CrossFire – multi-GPU technology allowing the simultaneous use of multiple GPUs
Unified Video Decoder (UVD) – acceleration of video decompression (decoding)
Video Coding Engine (VCE) – acceleration of video compression (encoding)
Software
Over the past decade, AMD has made considerable efforts to open its software tools above the firmware level.
In the descriptions that follow, software not expressly stated to be free can be assumed to be proprietary.
Distribution
AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux.
Software by type
CPU
AOCC is AMD's optimizing proprietary C/C++ compiler based on LLVM and available for Linux.
AMD uProf is AMD's CPU performance and power profiling tool suite, available for Linux and Windows.
AMD also took an active part in developing coreboot, an open-source project aimed at replacing proprietary BIOS firmware. This cooperation ceased in 2013, but AMD has since indicated that it is considering releasing source code so that Ryzen can be compatible with coreboot in the future.
GPU
AMD's most notable public software is on the GPU side.
AMD has opened both its graphic and compute stacks:
GPUOpen is AMD's graphics stack, which includes for example FidelityFX Super Resolution.
ROCm (Radeon Open Compute platform) is AMD's compute stack for machine learning and high-performance computing, based on the LLVM compiler technologies. Under the ROCm project, AMDgpu is AMD's open source device driver supporting the GCN and following architectures, available for Linux. This latter driver component is used both by the graphics and compute stacks.
Misc
AMD conducts open research on heterogeneous computing.
Other AMD software includes the AMD Core Math Library, and open-source software including the AMD Performance Library.
AMD contributes to open source projects, including working with Sun Microsystems to enhance OpenSolaris and Sun xVM on the AMD platform. AMD also maintains its own Open64 compiler distribution and contributes its changes back to the community.
In 2008, AMD released the low-level programming specifications for its GPUs, and works with the X.Org Foundation to develop drivers for AMD graphics cards.
Extensions for Software Parallelism (xSP), aimed at speeding up programs to enable multi-threaded and multi-core processing, was announced at the 2007 Technology Analyst Day. One initiative discussed since August 2007 is Light-Weight Profiling (LWP), which provides an internal hardware monitor with runtimes to observe information about the executing process and help redesign software for multi-core and multi-threaded optimization. Another is SSE5, an extension of the Streaming SIMD Extensions (SSE) instruction set.
SIMFIRE is the codename of an interoperability testing tool for the Desktop and mobile Architecture for System Hardware (DASH) open architecture.
Production and fabrication
Previously, AMD produced its chips at company-owned semiconductor foundries. It pursued a strategy of collaboration with other semiconductor manufacturers, including IBM and Motorola, to co-develop production technologies. AMD founder Jerry Sanders termed this the "Virtual Gorilla" strategy, intended to compete with Intel's significantly greater investments in fabrication.
In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi purchased the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), purchasing the final stake from AMD in 2009.
With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. Part of the GlobalFoundries spin-off included an agreement with AMD to produce some number of products at GlobalFoundries. Both before and after the spin-off, AMD has pursued production with other foundries, including TSMC and Samsung. It has been argued that this reduces risk for AMD by decreasing dependence on any one foundry, which has caused issues in the past.
In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that they were halting development of their 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021.
Corporate affairs
Partnerships
AMD uses strategic industry partnerships to further its business interests as well as to rival Intel's dominance and resources:
A partnership between AMD and Alpha Processor Inc. developed HyperTransport, a point-to-point interconnect standard which was turned over to an industry standards body for finalization. It is now used in modern motherboards that are compatible with AMD processors.
AMD also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90 nm implementation. AMD announced that the partnership would extend to 2011 for 32 nm and 22 nm fabrication-related technologies.
To facilitate processor distribution and sales, AMD is loosely partnered with end-user companies, such as HP, Dell, Asus, Acer, and Microsoft.
In 1993, AMD established a 50–50 partnership with Fujitsu called FASL, which merged into a new company, FASL LLC, in 2003. The joint venture went public under the name Spansion and the ticker symbol SPSN in December 2005, with AMD's stake dropping to 37%. AMD no longer directly participates in the Flash memory market: on December 21, 2005, it entered a non-competition agreement with Fujitsu and Spansion, under which it agreed not to directly or indirectly engage in a business that manufactures or supplies standalone semiconductor devices (including single-chip, multiple-chip, or system devices) containing only Flash memory.
On May 18, 2006, Dell announced that it would roll out new servers based on AMD's Opteron chips by year's end, thus ending an exclusive relationship with Intel. In September 2006, Dell began offering AMD Athlon X2 chips in their desktop lineup.
In June 2011, HP announced new business and consumer notebooks equipped with the latest versions of AMD APUs (accelerated processing units). AMD APUs would power HP business notebooks alongside Intel-based models.
In the spring of 2013, AMD announced that it would be powering all three major next-generation consoles: the Xbox One and Sony PlayStation 4 are both powered by custom-built AMD APUs, and the Nintendo Wii U is powered by an AMD GPU. According to AMD, having its processors in all three consoles would greatly assist developers with cross-platform development across competing consoles and PCs, and increase support for its products across the board.
AMD has entered into an agreement with Hindustan Semiconductor Manufacturing Corporation (HSMC) for the production of AMD products in India.
AMD is a founding member of the HSA Foundation which aims to ease the use of a Heterogeneous System Architecture. A Heterogeneous System Architecture is intended to use both central processing units and graphics processors to complete computational tasks.
AMD announced in 2016 that it was creating a joint venture to produce x86 server chips for the Chinese market.
On May 7, 2019, it was reported that the U.S. Department of Energy, Oak Ridge National Laboratory, and Cray Inc., are working in collaboration with AMD to develop the Frontier exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 1.5 exaflops (peak double-precision) in computing performance. It is expected to debut sometime in 2021.
On March 5, 2020, it was announced that the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE are working in collaboration with AMD to develop the El Capitan exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 2 exaflops (peak double-precision) in computing performance. It is expected to debut in 2023.
In the summer of 2020, it was reported that AMD would be powering the next-generation console offerings from Microsoft and Sony.
On November 8, 2021, AMD announced a partnership with Meta to make the chips used in the Metaverse.
In January 2022, AMD's partnership with Samsung to develop a mobile processor produced the Exynos 2200, which features a GPU based on the AMD RDNA 2 architecture.
Litigation with Intel
AMD has a long history of litigation with former (and current) partner and x86 creator Intel.
In 1986, Intel broke an agreement it had with AMD to allow them to produce Intel's microchips for IBM; AMD filed for arbitration in 1987 and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract.
In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation.
In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor.
In 2005, following an investigation, the Japan Fair Trade Commission found Intel guilty of a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleged systematic use of secret rebates, special discounts, threats, and other means used by Intel to lock AMD processors out of the global market. Since the start of this action, the court has issued subpoenas to major computer manufacturers including Acer, Dell, Lenovo, HP and Toshiba.
In November 2009, Intel agreed to pay AMD $1.25bn and renew a five-year patent cross-licensing agreement as part of a deal to settle all outstanding legal disputes between them.
Guinness World Record achievement
On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the "Highest frequency of a computer processor": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one module (two cores) active, cooled with liquid helium. The previous record was 8.308 GHz, with an Intel Celeron 352 (one core).
On November 1, 2011, geek.com reported that Andre Yang, an overclocker from Taiwan, used an FX-8150 to set another record: 8.461 GHz.
On November 19, 2012, Andre Yang used an FX-8350 to set another record: 8.794 GHz.
Acquisitions, mergers and investments
Corporate social responsibility
In its 2012 report on progress relating to conflict minerals, the Enough Project rated AMD the fifth most progressive of 24 consumer electronics companies.
Other initiatives
50x15, a digital inclusion initiative targeting 50% of the world's population to be connected to the Internet via affordable computers by the year 2015.
The Green Grid, founded by AMD together with other companies, including IBM, Sun and Microsoft, to seek lower power consumption in data centers.
See also
Bill Gaede
List of AMD processors
List of AMD accelerated processing units
List of AMD graphics processing units
List of AMD chipsets
List of ATI chipsets
3DNow!
Cool'n'Quiet
PowerNow!
Notes
References
Rodengen, Jeffrey L. The Spirit of AMD: Advanced Micro Devices. Write Stuff, 1998.
Ruiz, Hector. Slingshot: AMD's Fight to Free an Industry from the Ruthless Grip of Intel. Greenleaf Book Group, 2013.
External links
https://en.wikipedia.org/wiki/Acceleration
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes:
the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force;
that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass.
The SI unit for acceleration is the metre per second squared (m/s², or equivalently m·s⁻²).
For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called linear (or tangential, during circular motions) acceleration, the reaction to which passengers on board experience as a force pushing them back into their seats. When changing direction, the resulting acceleration is called radial (or centripetal, during circular motions) acceleration, the reaction to which passengers experience as a centrifugal force. If the speed of the vehicle decreases, the acceleration is in the opposite direction of the velocity vector (mathematically negative, if the movement is one-dimensional and the velocity is positive); this is sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as both are changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their velocity relative to the vehicle is neutralized.
Definition and properties
Average acceleration
An object's average acceleration over a period of time is its change in velocity, Δv, divided by the duration of the period, Δt. Mathematically,

ā = Δv / Δt
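The definition above can be evaluated directly with plain numbers; a small sketch (the function name and example values are illustrative, not from the article), working component-wise so the vector case is covered:

```python
def average_acceleration(v_initial, v_final, dt):
    """Average acceleration: change in velocity divided by elapsed time.

    Velocities are given component-wise as tuples (m/s); dt in seconds.
    """
    return tuple((vf - vi) / dt for vi, vf in zip(v_initial, v_final))

# A car going from rest to 27 m/s due east over 9 s:
a_avg = average_acceleration((0.0, 0.0), (27.0, 0.0), 9.0)
print(a_avg)  # (3.0, 0.0) -> 3 m/s^2 in the direction of travel
```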
Instantaneous acceleration
Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time:

a = dv/dt
As acceleration is defined as the derivative of velocity, v, with respect to time and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t:

a = dv/dt = d²x/dt²
(Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.)
By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function a(t) is the velocity function v(t); that is, the area under the curve of an acceleration vs. time (a vs. t) graph corresponds to the change of velocity:

Δv = ∫ a dt
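This "area under the curve" relationship can be checked numerically; the sketch below (function name is illustrative) integrates a sampled acceleration series with the trapezoidal rule to recover the change in velocity:

```python
def delta_v(accel_samples, dt):
    """Trapezoidal integral of evenly spaced acceleration samples (m/s^2).

    Returns the change in velocity over the sampled interval (m/s).
    """
    total = 0.0
    for a0, a1 in zip(accel_samples, accel_samples[1:]):
        total += 0.5 * (a0 + a1) * dt  # area of one trapezoid
    return total

# Constant 2 m/s^2 sampled every 0.5 s for 4 s (9 samples, 8 intervals):
print(delta_v([2.0] * 9, 0.5))  # 8.0 m/s, as expected from a*t = 2*4
```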
Likewise, the integral of the jerk function j(t), the derivative of the acceleration function, can be used to find the change of acceleration at a certain time:

Δa = ∫ j dt
Units
Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L·T⁻². The SI unit of acceleration is the metre per second squared (m·s⁻²); or "metre per second per second", as the velocity in metres per second changes by the acceleration value every second.
Other forms
An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration.
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer.
In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law):

F = m·a

where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large.
Tangential and centripetal acceleration
The velocity of a particle moving on a curved path as a function of time can be written as:

v(t) = v(t) u_t(t),

with v(t) equal to the speed of travel along the path, and u_t a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of u_t, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as:

a = (dv/dt) u_t + (v²/r) u_n,

where u_n is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. The components

a_t = (dv/dt) u_t   and   a_n = (v²/r) u_n

are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively.
Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas.
Special cases
Uniform acceleration
Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period.
A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g (also called acceleration due to gravity). By Newton's Second Law the force F acting on a body is given by:

F = m·g
Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed:

s(t) = s₀ + v₀t + ½at²
v(t) = v₀ + at
v² = v₀² + 2a(s − s₀)

where
t is the elapsed time,
s₀ is the initial displacement from the origin,
s(t) is the displacement from the origin at time t,
v₀ is the initial velocity,
v(t) is the velocity at time t, and
a is the uniform rate of acceleration.
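The constant-acceleration formulas can be checked numerically with a short sketch (the function name and the g value are assumptions for illustration; symbols mirror the list above):

```python
def kinematics(s0, v0, a, t):
    """Displacement and velocity under uniform acceleration.

    s = s0 + v0*t + a*t**2/2,   v = v0 + a*t
    """
    s = s0 + v0 * t + 0.5 * a * t * t
    v = v0 + a * t
    return s, v

# Free fall from rest for 2 s with g = 9.81 m/s^2:
s, v = kinematics(0.0, 0.0, 9.81, 2.0)
# s = 19.62 m fallen, v = 19.62 m/s.
# Consistency with the time-free relation v^2 = v0^2 + 2*a*(s - s0):
assert abs(v ** 2 - 2 * 9.81 * s) < 1e-9
```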
In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth.
Circular motion
In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, respectively orthogonal to the radius in this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent in the neighboring point, thereby rotating the velocity vector along the circle.
For a given speed v, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r of the circle, and increases as the square of this speed:

a_c = v²/r
For a given angular velocity ω, the centripetal acceleration is directly proportional to the radius r, because a_c = ω²r. This is due to the dependence of the speed on the radius: v = ωr.
Expressing the centripetal acceleration vector in polar components, where r is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields

a = −(v²/|r|²) r.
As usual in rotations, the speed v of a particle may be expressed as an angular speed ω with respect to a point at the distance |r| as

v = ω|r|.
Thus

a = −ω² r.
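The equivalent relations a_c = v²/r = ω²r can be sketched numerically (the helper below is illustrative, not from the article):

```python
import math

def centripetal(v=None, omega=None, r=1.0):
    """Centripetal acceleration magnitude on a circle of radius r.

    a = v**2 / r when the speed v is given, or a = omega**2 * r when
    the angular speed omega is given; they agree because v = omega * r.
    """
    if v is not None:
        return v * v / r
    return omega * omega * r

# A point on a wheel rim, r = 0.5 m, spinning at omega = 4 rad/s:
a_from_omega = centripetal(omega=4.0, r=0.5)     # 8.0 m/s^2
a_from_speed = centripetal(v=4.0 * 0.5, r=0.5)   # v = omega*r = 2 m/s
assert math.isclose(a_from_omega, a_from_speed)
```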
This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a so-called pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion.
In a nonuniform circular motion, i.e., when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which directs to the center of the osculating circle, that determines the radius r for the centripetal acceleration. The tangential component is given by the angular acceleration α, i.e., the rate of change α = dω/dt of the angular speed ω, times the radius r. That is,

a_t = rα.

The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration (α), and the tangent is always directed at right angles to the radius vector.
Relation to relativity
Special relativity
The special theory of relativity describes the behavior of objects traveling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is then revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations.
As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it.
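One way to illustrate this claim is a small sketch, assuming straight-line motion so the longitudinal relation a = F/(γ³m) applies (the function name and example numbers are illustrative):

```python
def coordinate_acceleration(force, mass, v, c=299_792_458.0):
    """Acceleration produced by a constant force parallel to the motion.

    In special relativity a = F / (gamma**3 * m) for longitudinal
    forces, so a fixed force yields ever-smaller acceleration as the
    speed v approaches c.
    """
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return force / (gamma ** 3 * mass)

# 1 N acting on 1 kg: classically 1 m/s^2 regardless of speed.
print(coordinate_acceleration(1.0, 1.0, 0.0))                  # 1.0
print(coordinate_acceleration(1.0, 1.0, 0.9 * 299_792_458.0))  # ~0.083
```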
General relativity
Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating.
Conversions
See also
Acceleration (differential geometry)
Four-vector: making the connection between space and time explicit
Gravitational acceleration
Inertia
Orders of magnitude (acceleration)
Shock (mechanics)
Shock and vibration data logger (measuring 3-axis acceleration)
Space travel using constant acceleration
Specific force
References
External links
Acceleration Calculator – simple acceleration unit converter
Acceleration Calculator – converts units from metres per second squared, kilometres per second squared, millimetres per second squared, and more, with metric conversion.
https://en.wikipedia.org/wiki/Apoptosis
Apoptosis (from the Greek for "falling off") is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between eight and fourteen years old, approximately 20 to 30 billion cells are lost each day.
In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them.
Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately.
In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis.
Discovery and etymology
German scientist Carl Vogt was the first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at the University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz.
For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death.
The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis.
In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent ( ) and the second p pronounced (). In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc.
In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation:
We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" () is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid.
Activation mechanisms
The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC).
A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell suicide. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single cell fluctuations have been observed in experimental studies of stress induced apoptosis.
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain.
Intrinsic pathway
The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis.
During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers inserted into the outer membrane. Once cytochrome c is released it binds with apoptotic protease activating factor-1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves pro-caspase-9 to its active form, caspase-9, which in turn cleaves and activates pro-caspase-3 into the effector caspase-3.
Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability.
Extrinsic pathway
Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals.
TNF pathway
TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis.
Fas pathway
The fas receptor (First apoptosis signal) – (also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains the FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8.
Common components
Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance determines the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal conditions in nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family.
Caspases
Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent, aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11 and 12) and effector caspases (caspases 3, 6 and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program.
Caspase-independent apoptotic pathway
There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor).
Apoptosis model in amphibians
The frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins during amphibian metamorphosis, and stimulate the evolution of their nervous system, transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog.
Negative regulators of apoptosis
Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized as either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB.
Proteolytic caspase cascade: Killing the cell
Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized. mRNA decay is triggered very early in apoptosis.
A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include:
Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases.
The cytoplasm appears dense, and the organelles appear tightly packed.
Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis.
The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA.
Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on an agarose gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death.
Apoptotic cell disassembly
Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly:
Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1).
Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia.
Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes.
Removal of dead cells
The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis.
Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation.
Pathway knock-outs
Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the resulting phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase 9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used to generate an APAF-1 -/- mouse. This assay disrupts gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brains of the embryos showed several structural changes. APAF-1-deficient cells are protected from apoptotic stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation but decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons.
The caspase proteins are integral parts of the apoptosis pathway, so it follows that their knock-outs have a variety of damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc., but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these KO mice is their very restricted phenotype: Casp3, Casp9, and APAF-1 KO mice have deformations of neural tissue, while FADD and Casp8 KO mice showed defective heart development; however, in both types of KO other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist.
Methods for distinguishing apoptotic from necrotic cells
Label-free live cell imaging, time-lapse microscopy, flow cytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include the absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in these references.
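The surface-marker comparison above (phosphatidylserine exposure versus membrane permeability) is commonly scored with an annexin V / membrane-impermeant dye double stain. A hypothetical scoring function; the two-marker logic is standard, but the category labels here are illustrative, not from the source:

```python
def classify_cell(annexin_v_positive: bool, dye_permeable: bool) -> str:
    """Classify a cell from an annexin V / vital dye double stain.

    Annexin V binds exposed phosphatidylserine; membrane-impermeant dyes
    (e.g. propidium iodide) only enter cells with a compromised membrane.
    """
    if annexin_v_positive and not dye_permeable:
        return "early apoptotic"  # PS exposed, membrane still intact
    if annexin_v_positive and dye_permeable:
        return "late apoptotic / secondary necrotic"
    if dye_permeable:
        return "necrotic"  # permeable without a preceding PS signal
    return "viable"

print(classify_cell(True, False))  # early apoptotic
```

As the text notes, these are partly negative and kinetic distinctions: late apoptotic and primary necrotic cells can look identical at a single time point.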
Implication in disease
Defective pathways
The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept underlying each one is the same: The normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased.
A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of the apoptotic activator cytochrome c; overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in the regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing dependence on the tissue to which the cell belongs. This degree of independence from external survival signals can enable cancer metastasis.
Dysregulation of p53
The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in an increase of p53 protein level and enhancement of cancer cell apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors.
Inhibition
Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: Cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis").
HeLa cell
Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring.
Treatments
The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility of diseased cells to apoptosis, depending on whether the disease is caused by inhibited or excess apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death and to increase the apoptotic threshold to treat diseases involving excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitor (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53-MDM2 complexes releases p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway.
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis.
Hyperactive apoptosis
On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "Inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated.
At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM.
Treatments
Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold, resulting in dissociation of Bad from Bcl-2 and thus cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type.
HIV progression
The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways:
HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis.
HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis.
HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane.
Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells.
HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue.
The infected CD4+ cell may also receive the death signal from a cytotoxic T cell.
Cells may also die as a direct consequence of viral infection. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; because of HIV's cytotoxic activity toward CD4+ lymphocytes, the infection is classified as AIDS once a given patient's CD4+ cell count falls below 200 cells per microliter.
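The CD4+ staging criterion above reduces to a simple threshold check. A trivial sketch; the function name is invented for illustration, and in practice an AIDS diagnosis also considers AIDS-defining illnesses, not the count alone:

```python
AIDS_CD4_THRESHOLD = 200  # cells per microliter, the staging criterion cited above

def meets_cd4_aids_criterion(cd4_count: int) -> bool:
    """True if the CD4+ count alone falls below the AIDS-defining threshold."""
    return cd4_count < AIDS_CD4_THRESHOLD

print(meets_cd4_aids_criterion(150))  # True
print(meets_cd4_aids_criterion(500))  # False
```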
Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the HIV virus in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining the currently existing drug therapy with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV.
Viral infection
Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for normal cell development and cell cycle maturation. It is also important in maintaining the regular functions and activities of cells.
Viruses can trigger apoptosis of infected cells via a range of mechanisms including:
Receptor binding
Activation of protein kinase R (PKR)
Interaction with p53
Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as Natural Killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis.
Canine distemper virus (CDV) is known to cause apoptosis in the central nervous system and lymphoid tissue of infected dogs in vivo and in vitro.
Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by internal stimuli caused by viral infection, not by a caspase cascade.
The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice.
OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes a febrile illness, characterized by the onset of a sudden fever, known as Oropouche fever.
The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected.
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. It can be interpreted by counting, measuring, and analyzing the cells of the Sub/G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the membrane of the mitochondria into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway.
In order for apoptosis to occur in OROV infection, viral uncoating and internalization, along with the replication of cells, are necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria.
Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function.
Viruses can remain intact during apoptosis, particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. Prions can cause apoptosis in neurons.
Plants
Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear.
Caspase-independent apoptosis
The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
See also
Anoikis
Apaf-1
Apo2.7
Apoptotic DNA fragmentation
Atromentin
Autolysis
Autophagy
Cisplatin
Cytotoxicity
Entosis
Ferroptosis
Homeostasis
Immunology
Necrobiosis
Necrosis
Necrotaxis
Nemosis
Mitotic catastrophe
p53
Paraptosis
Pseudoapoptosis
PI3K/AKT/mTOR pathway
Explanatory footnotes
Citations
General bibliography
External links
Apoptosis & cell surface
Apoptosis & Caspase 3, The Proteolysis Map – animation
Apoptosis & Caspase 8, The Proteolysis Map – animation
Apoptosis & Caspase 7, The Proteolysis Map – animation
Apoptosis MiniCOPE Dictionary – list of apoptosis terms and acronyms
Apoptosis (Programmed Cell Death) – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Apoptosis Research Portal
Apoptosis Info Apoptosis protocols, articles, news, and recent publications.
Database of proteins involved in apoptosis
Apoptosis Video
Apoptosis Video (WEHI on YouTube )
The Mechanisms of Apoptosis Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase 8 pathway. Accessed 25 March 2007.
WikiPathways – Apoptosis pathway
"Finding Cancer's Self-Destruct Button". CR magazine (Spring 2007). Article on apoptosis and cancer.
Xiaodong Wang's lecture: Introduction to Apoptosis
Robert Horvitz's Short Clip: Discovering Programmed Cell Death
The Bcl-2 Database
DeathBase: a database of proteins involved in cell death, curated by experts
European Cell Death Organization
Apoptosis signaling pathway created by Cusabio
Cell signaling
Cellular senescence
Immunology
Medical aspects of death
Programmed cell death
|
https://en.wikipedia.org/wiki/Anus
|
The anus (pl. anuses or ani; from Latin, 'ring' or 'circle') is an opening at the opposite end of an animal's digestive tract from the mouth. Its function is to control the expulsion of feces, the residual semi-solid waste that remains after food digestion, which, depending on the type of animal, includes: matter which the animal cannot digest, such as bones; food material after the nutrients have been extracted, for example cellulose or lignin; ingested matter which would be toxic if it remained in the digestive tract; and dead or excess gut bacteria and other endosymbionts.
Amphibians, reptiles, and birds use the same orifice (known as the cloaca) for excreting liquid and solid wastes, for copulation and egg-laying. Monotreme mammals also have a cloaca, which is thought to be a feature inherited from the earliest amniotes via the therapsids. Marsupials have a single orifice for excreting both solids and liquids and, in females, a separate vagina for reproduction. Female placental mammals have completely separate orifices for defecation, urination, and reproduction; males have one opening for defecation and another for both urination and reproduction, although the channels flowing to that orifice are almost completely separate.
The development of the anus was an important stage in the evolution of multicellular animals. It appears to have happened at least twice, following different paths in protostomes and deuterostomes. This accompanied or facilitated other important evolutionary developments: the bilaterian body plan, the coelom, and metamerism, in which the body was built of repeated "modules" which could later specialize, such as the heads of most arthropods, which are composed of fused, specialized segments.
Among comb jellies there are species with one, and sometimes two, permanent anuses; other species, such as the warty comb jelly, grow a transient anus that disappears when it is no longer needed.
Development
In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the growth of the gut. In deuterostomes, the original dent becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. The protostomes were so named because it was thought that in their embryos the dent formed the mouth first (proto– meaning "first") and the anus was formed later at the opening made by the other end of the gut. Research from 2001 shows that in protostomes the edges of the dent close up in the middle, leaving openings at the ends which become the mouth and anus.
See also
References
External links
Digestive system
|
https://en.wikipedia.org/wiki/Amphetamine
|
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use.
The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems.
At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects.
Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group.
Uses
Medical
Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy (a sleep disorder), and obesity, and is sometimes prescribed for its past medical indications, particularly for depression and chronic pain.
Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult.
Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals.
Enhancing performance
Cognitive performance
In 2015, a systematic review and a meta-analysis of high-quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, some college students use diverted ADHD stimulants, primarily for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control.
Physical performance
Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature.
Recreational
Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and, like other amphetamines, is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit, a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement, and positively valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open Dexedrine capsules and crush the contents in order to insufflate (snort) them or dissolve them in water and inject them. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets).
Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops.
Contraindications
According to the International Programme on Chemical Safety (IPCS) and the United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical.
Adverse effects
The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the USFDA for long-term therapeutic use. Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes.
Physical
Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses.
Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids.
USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease.
Psychological
At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility.
Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine.
Reinforcement disorders
Addiction
Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction.
Biomolecular mechanisms
Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others.
ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs.
The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation.
Pharmacological treatments
There is currently no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction.
A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
Behavioral treatments
A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB immunoreactivity in the striatum or other parts of the reward system.
Dependence and withdrawal
Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect.
According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose.
Overdose
An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide.
Toxicity
In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability.
Psychosis
An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that some users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use.
Drug interactions
Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine, respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD.
In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic).
Pharmacology
Pharmacodynamics
Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, are increased dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum.
Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cAMP production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons.
In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with low micromolar affinity.
The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with its known molecular targets (e.g., TAAR1, the neuronal and vesicular monoamine transporters, and EAAT3) and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain.
Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects.
Dopamine
In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through the dopamine transporter (DAT) or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization (non-competitive reuptake inhibition), but PKC-mediated phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state.
Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, VMAT2. Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT.
Norepinephrine
Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal TAAR1 and norepinephrine transporter (NET) expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated NET, competitive NET reuptake inhibition, and norepinephrine release from VMAT2.
Serotonin
Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via the serotonin transporter (SERT) and, like norepinephrine, is thought to phosphorylate SERT via TAAR1 signaling. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor.
Other neurotransmitters, peptides, hormones, and enzymes
Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through TAAR1. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis.
In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma.
Pharmacokinetics
The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid-soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. A portion of the amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue.
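The relationship between pH and the absorbable free base fraction described above follows the Henderson–Hasselbalch equation. The sketch below is an illustration of that equation only (not a pharmacokinetic model), using the stated pKa of 9.9:

```python
# Henderson–Hasselbalch: for a weak base, the un-ionized (lipid-soluble
# free base) fraction at a given pH is 1 / (1 + 10 ** (pKa - pH)).
def unionized_fraction(pH, pKa=9.9):
    """Fraction of a weak base present in its absorbable free base form."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# The un-ionized, absorbable fraction rises steeply with pH:
for pH in (2.0, 7.4, 9.9):
    print(f"pH {pH}: {unionized_fraction(pH):.4%} un-ionized")
```

At a pH equal to the pKa, exactly half the drug is un-ionized, which is why basic conditions (closer to 9.9) favor absorption while acidic conditions leave the drug almost entirely in its cationic form.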
The half-lives of the amphetamine enantiomers differ and vary with urine pH; at normal urine pH, levoamphetamine has a modestly longer half-life than dextroamphetamine. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended-release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose, respectively. Amphetamine is eliminated via the kidneys, with a portion of the drug excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose.
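The practical effect of the pH-dependent half-life follows from standard first-order elimination, where the fraction remaining after time t is (1/2)^(t/t½). A brief sketch using the half-life bounds quoted above (7 h in highly acidic urine, 34 h in highly alkaline urine):

```python
# First-order elimination: fraction of drug remaining after t hours,
# given an elimination half-life in hours.
def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

# After 24 hours, the remaining fraction differs several-fold depending
# on whether urine pH has shortened or lengthened the half-life:
print(f"acidic urine (t1/2 = 7 h):    {fraction_remaining(24, 7):.1%} remaining")
print(f"alkaline urine (t1/2 = 34 h): {fraction_remaining(24, 34):.1%} remaining")
```

This is why urinary acidification accelerates clearance while alkalinization prolongs the drug's presence in the body.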
CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are , , and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following:
Pharmacomicrobiomics
The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics.
Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the bloodstream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds.
Related endogenous compounds
Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and , an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from by the aromatic amino acid decarboxylase (AADC) enzyme, which converts into dopamine as well. In turn, is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and regulate monoamine neurotransmission via ; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine.
Chemistry
Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula . The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor and an acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of .
Substituted derivatives
The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups.
Synthesis
Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields . This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid is added, and amphetamine precipitates out as the sulfate salt.
A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine.
A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta-chloropropylbenzene, which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with which through a double alkylation with methyl iodide followed by benzyl chloride can be converted into acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4).
A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional groups. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields . The double bond and nitro group of this intermediate is reduced using either catalytic hydrogenation or by treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6).
Detection in body fluids
Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for days.
A study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with the derivatizing agent chloride allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug.
History, society, and culture
Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes.
Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states 11.9 million adults of ages have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU ranged from per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA.
Legal status
As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment.
Pharmaceutical products
Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below.
Notes
Image legend
Reference notes
References
External links
– Dextroamphetamine
– Levoamphetamine
Comparative Toxicogenomics Database entry: Amphetamine
Comparative Toxicogenomics Database entry: CARTPT
5-HT1A agonists
Anorectics
Aphrodisiacs
Carbonic anhydrase activators
Drugs acting on the cardiovascular system
Drugs acting on the nervous system
Drugs in sport
Ergogenic aids
Euphoriants
Excitatory amino acid reuptake inhibitors
German inventions
Management of obesity
Narcolepsy
Nootropics
Norepinephrine-dopamine releasing agents
Phenethylamines
Stimulants
Substituted amphetamines
TAAR1 agonists
Attention deficit hyperactivity disorder management
VMAT inhibitors
World Anti-Doping Agency prohibited substances
https://en.wikipedia.org/wiki/Agarose
Agarose is a heteropolysaccharide, generally extracted from certain red seaweed. It is a linear polymer made up of the repeating unit of agarobiose, which is a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agarose is one of the two principal components of agar, and is purified from agar by removing agar's other component, agaropectin.
Agarose is frequently used in molecular biology for the separation of large molecules, especially DNA, by electrophoresis. Slabs of agarose gels (usually 0.7–2%) for electrophoresis are readily prepared by pouring the warm, liquid solution into a mold. A wide range of different agaroses of varying molecular weights and properties are commercially available for this purpose. Agarose may also be formed into beads and used in a number of chromatographic methods for protein purification.
Structure
Agarose is a linear polymer with a molecular weight of about 120,000, consisting of alternating D-galactose and 3,6-anhydro-L-galactopyranose linked by α-(1→3) and β-(1→4) glycosidic bonds. The 3,6-anhydro-L-galactopyranose is an L-galactose with an anhydro bridge between the 3 and 6 positions, although some L-galactose units in the polymer may not contain the bridge. Some D-galactose and L-galactose units can be methylated, and pyruvate and sulfate are also found in small quantities.
Each agarose chain contains ~800 galactose units, and the agarose polymer chains form helical fibers that aggregate into a supercoiled structure with a radius of 20–30 nanometres (nm). The fibers are quasi-rigid, and have a wide range of lengths depending on the agarose concentration. When solidified, the fibers form a three-dimensional mesh of channels with diameters ranging from 50 nm to >200 nm depending on the concentration of agarose used; higher concentrations yield smaller average pore diameters. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state.
Properties
Agarose is available as a white powder which dissolves in near-boiling water, and forms a gel when it cools. Agarose exhibits the phenomenon of thermal hysteresis in its liquid-to-gel transition, i.e. it gels and melts at different temperatures. The gelling and melting temperatures vary depending on the type of agarose. Standard agaroses derived from Gelidium have a gelling temperature of and a melting temperature of , while those derived from Gracilaria, due to their higher methoxy substituents, have a gelling temperature of and melting temperature of . The melting and gelling temperatures may be dependent on the concentration of the gel, particularly at gel concentrations of less than 1%. The gelling and melting temperatures are therefore given at a specified agarose concentration.
Natural agarose contains uncharged methyl groups, and the extent of methylation is directly proportional to the gelling temperature. Synthetic methylation, however, has the reverse effect: increased methylation lowers the gelling temperature. A variety of chemically modified agaroses with different melting and gelling temperatures are commercially available.
The agarose in the gel forms a meshwork that contains pores, and the size of the pores depends on the concentration of agarose added. On standing, the agarose gels are prone to syneresis (extrusion of water through the gel surface), but the process is slow enough to not interfere with the use of the gel.
Agarose gel can have high gel strength at low concentration, making it suitable as an anti-convection medium for gel electrophoresis. Agarose gels as dilute as 0.15% can form slabs for gel electrophoresis. The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups can slow down the movement of DNA molecules in a process called electroendosmosis (EEO), and low-EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids. Zero-EEO agaroses are also available, but these may be undesirable for some applications as they may be made by adding positively charged groups that can affect subsequent enzyme reactions. Electroendosmosis is one reason agarose is used preferentially over agar, as the agaropectin in agar contains a significant amount of negatively charged sulfate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications, such as the electrophoresis of serum proteins, a high EEO may be desirable, and agaropectin may be added to the gel used.
Low melting and gelling temperature agaroses
The melting and gelling temperatures of agarose can be modified by chemical modifications, most commonly by hydroxyethylation, which reduces the number of intrastrand hydrogen bonds, resulting in lower melting and setting temperatures compared to standard agaroses. The exact temperature is determined by the degree of substitution, and many available low-melting-point (LMP) agaroses can remain fluid at range. This property allows enzymatic manipulations to be carried out directly after the DNA gel electrophoresis by adding slices of melted gel containing the DNA fragment of interest to a reaction mixture. LMP agarose contains fewer of the sulfates that can affect some enzymatic reactions, and is therefore preferred for some applications.
Hydroxyethylated agarose also has a smaller pore size (~90 nm) than standard agaroses. Hydroxyethylation may reduce the pore size by reducing the packing density of the agarose bundles, therefore LMP gel can also have an effect on the time and separation during electrophoresis. Ultra-low melting or gelling temperature agaroses may gel only at .
Applications
Agarose is a preferred matrix for work with proteins and nucleic acids as it has a broad range of physical, chemical and thermal stability, and its lower degree of chemical complexity also makes it less likely to interact with biomolecules. Agarose is most commonly used as the medium for analytical scale electrophoretic separation in agarose gel electrophoresis. Gels made from purified agarose have a relatively large pore size, making them useful for separation of large molecules, such as proteins and protein complexes >200 kilodaltons, as well as DNA fragments >100 basepairs. Agarose is also used widely for a number of other applications, for example immunodiffusion and immunoelectrophoresis, as the agarose fibers can function as anchor for immunocomplexes.
Agarose gel electrophoresis
Agarose gel electrophoresis is the routine method for resolving DNA in the laboratory. Agarose gels have lower resolving power for DNA than acrylamide gels, but they have greater range of separation, and are therefore usually used for DNA fragments with lengths of 50–20,000 bp (base pairs), although resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large protein molecules, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5-10 nm.
The pore size of the gel affects the size of the DNA that can be sieved. The lower the concentration of the gel, the larger the pore size, and the larger the DNA that can be sieved. However, low-concentration gels (0.1–0.2%) are fragile and therefore hard to handle, and the electrophoresis of large DNA molecules can take several days. The limit of resolution for standard agarose gel electrophoresis is around 750 kb. This limit can be overcome by PFGE, where alternating orthogonal electric fields are applied to the gel. The DNA fragments reorientate themselves when the applied field switches direction, but larger DNA molecules take longer to realign than smaller ones when the field is altered, so the DNA can be fractionated according to size.
Agarose gels are cast in a mold, and when set, usually run horizontally submerged in a buffer solution. Tris-acetate-EDTA and Tris-Borate-EDTA buffers are commonly used, but other buffers such as Tris-phosphate, barbituric acid-sodium barbiturate or Tris-barbiturate buffers may be used in other applications. The DNA is normally visualized by staining with ethidium bromide and then viewed under a UV light, but other methods of staining are available, such as SYBR Green, GelRed, methylene blue, and crystal violet. If the separated DNA fragments are needed for further downstream experiment, they can be cut out from the gel in slices for further manipulation.
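In practice, sizes of separated fragments are estimated from a gel image: migration distance is roughly linear in the logarithm of fragment length over the gel's resolving range, so an unknown band is interpolated against a ladder run in an adjacent lane. A minimal sketch (the ladder band positions below are hypothetical, not measured values):

```python
import math

# Hypothetical ladder: (fragment length in bp, migration distance in cm).
ladder = [(10000, 1.0), (5000, 2.0), (2000, 3.3), (1000, 4.3),
          (500, 5.3), (250, 6.3)]

def estimate_size(distance_cm: float) -> float:
    """Estimate fragment length by linear interpolation of log10(bp)
    against migration distance, between the two bracketing ladder bands."""
    pts = sorted((d, math.log10(bp)) for bp, d in ladder)
    for (d0, l0), (d1, l1) in zip(pts, pts[1:]):
        if d0 <= distance_cm <= d1:
            frac = (distance_cm - d0) / (d1 - d0)
            return 10 ** (l0 + frac * (l1 - l0))
    raise ValueError("distance outside the ladder's range")

# A band exactly at a ladder band's position recovers that band's size.
print(round(estimate_size(4.3)))  # 1000
```

The log-linear assumption breaks down near the gel's resolution limits, which is one reason ladders are chosen to bracket the expected fragment sizes.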
Protein purification
Agarose gel matrix is often used for protein purification, for example, in column-based preparative scale separation as in gel filtration chromatography, affinity chromatography and ion exchange chromatography. It is however not used as a continuous gel, rather it is formed into porous beads or resins of varying fineness. The beads are highly porous so that protein may flow freely through the beads. These agarose-based beads are generally soft and easily crushed, so they should be used under gravity-flow, low-speed centrifugation, or low-pressure procedures. The strength of the resins can be improved by increased cross-linking and chemical hardening of the agarose resins, however such changes may also result in a lower binding capacity for protein in some separation procedures such as affinity chromatography.
Agarose is a useful material for chromatography because it does not adsorb biomolecules to any significant extent, has good flow properties, and can tolerate extremes of pH and ionic strength as well as high concentrations of denaturants such as 8M urea or 6M guanidine HCl. Examples of agarose-based matrices for gel filtration chromatography are Sepharose and WorkBeads 40 SEC (cross-linked beaded agarose), Praesto and Superose (highly cross-linked beaded agaroses), and Superdex (dextran covalently linked to agarose).
For affinity chromatography, beaded agarose is the most commonly used matrix resin for the attachment of the ligands that bind protein. The ligands are linked covalently through a spacer to activated hydroxyl groups of agarose bead polymer. Proteins of interest can then be selectively bound to the ligands to separate them from other proteins, after which it can be eluted. The agarose beads used are typically of 4% and 6% densities with a high binding capacity for protein.
Solid culture media
Agarose plate may sometimes be used instead of agar for culturing organisms as agar may contain impurities that can affect the growth of the organism or some downstream procedures such as polymerase chain reaction (PCR). Agarose is also harder than agar and may therefore be preferable where greater gel strength is necessary, and its lower gelling temperature may prevent causing thermal shock to the organism when the cells are suspended in liquid before gelling. It may be used for the culture of strict autotrophic bacteria, plant protoplast, Caenorhabditis elegans, other organisms and various cell lines.
Motility assays
Agarose is sometimes used instead of agar to measure microorganism motility and mobility. Motile species will be able to migrate, albeit slowly, throughout the porous gel and infiltration rates can then be visualized. The gel's porosity is directly related to the concentration of agar or agarose in the medium, so different concentration gels may be used to assess a cell's swimming, swarming, gliding and twitching motility. Under-agarose cell migration assay may be used to measure chemotaxis and chemokinesis. A layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient.
See also
Agar
SDD-AGE
References
Polysaccharides
https://en.wikipedia.org/wiki/Autocorrelation
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.
Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.
Auto-correlation of stochastic processes
In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let $\{X_t\}$ be a random process, and $t$ be any point in time ($t$ may be an integer for a discrete-time process or a real number for a continuous-time process). Then $X_t$ is the value (or realization) produced by a given run of the process at time $t$. Suppose that the process has mean $\mu_t$ and variance $\sigma_t^2$ at time $t$, for each $t$. Then the definition of the auto-correlation function between times $t_1$ and $t_2$ is
$$\operatorname{R}_{XX}(t_1, t_2) = \operatorname{E}\left[X_{t_1} \overline{X_{t_2}}\right]$$
where $\operatorname{E}$ is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined.
Subtracting the mean before multiplication yields the auto-covariance function between times $t_1$ and $t_2$:
$$\operatorname{K}_{XX}(t_1, t_2) = \operatorname{E}\left[(X_{t_1} - \mu_{t_1})\overline{(X_{t_2} - \mu_{t_2})}\right]$$
Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law).
Definition for wide-sense stationary stochastic process
If $\{X_t\}$ is a wide-sense stationary process then the mean $\mu$ and the variance $\sigma^2$ are time-independent, and further the autocovariance function depends only on the lag $\tau = t_2 - t_1$: the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and auto-correlation can be expressed as a function of the time-lag, and that this would be an even function of the lag $\tau$. This gives the more familiar form for the auto-correlation function
$$\operatorname{R}_{XX}(\tau) = \operatorname{E}\left[X_{t+\tau} \overline{X_t}\right]$$
and the auto-covariance function:
$$\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_{t+\tau} - \mu)\overline{(X_t - \mu)}\right]$$
In particular, note that $\operatorname{K}_{XX}(0) = \sigma^2$.
Normalization
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the auto-correlation coefficient of a stochastic process is
$$\rho_{XX}(t_1, t_2) = \frac{\operatorname{K}_{XX}(t_1, t_2)}{\sigma_{t_1} \sigma_{t_2}}$$
If the function $\rho_{XX}$ is well defined, its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For a wide-sense stationary (WSS) process, the definition is
$$\rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^2} = \frac{\operatorname{R}_{XX}(\tau) - \mu^2}{\sigma^2}.$$
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
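A minimal sketch of the normalized sample autocorrelation for a finite real-valued series, using the common biased estimator (dividing by the full-series sum of squared deviations, which keeps every coefficient in [−1, 1]):

```python
def autocorr_coeff(x, lag):
    """Sample autocorrelation coefficient at the given lag (biased estimator):
    rho(k) = sum_t (x_t - mean)(x_{t+k} - mean) / sum_t (x_t - mean)^2."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

x = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0]
print(autocorr_coeff(x, 0))  # 1.0 -- perfect correlation at zero lag
```

The unbiased variant (dividing the covariance by n − k instead of n) has smaller bias at large lags but can produce estimates outside [−1, 1], which is one of the statistical trade-offs the normalization choice entails.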
Properties
Symmetry property
The fact that the auto-correlation function $\operatorname{R}_{XX}$ is an even function can be stated as
$$\operatorname{R}_{XX}(t_1, t_2) = \overline{\operatorname{R}_{XX}(t_2, t_1)}$$
respectively for a WSS process:
$$\operatorname{R}_{XX}(\tau) = \overline{\operatorname{R}_{XX}(-\tau)}$$
Maximum at zero
For a WSS process:
$$\left|\operatorname{R}_{XX}(\tau)\right| \leq \operatorname{R}_{XX}(0)$$
Notice that $\operatorname{R}_{XX}(0)$ is always real.
Cauchy–Schwarz inequality
The Cauchy–Schwarz inequality for stochastic processes:
$$\left|\operatorname{R}_{XX}(t_1, t_2)\right|^2 \leq \operatorname{E}\left[|X_{t_1}|^2\right] \operatorname{E}\left[|X_{t_2}|^2\right]$$
Autocorrelation of white noise
The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at $\tau = 0$ and will be exactly $0$ for all other $\tau$.
Wiener–Khinchin theorem
The Wiener–Khinchin theorem relates the autocorrelation function $\operatorname{R}_{XX}$ to the power spectral density $S_{XX}$ via the Fourier transform:
$$\operatorname{R}_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f) e^{i 2\pi f \tau} \, df$$
$$S_{XX}(f) = \int_{-\infty}^{\infty} \operatorname{R}_{XX}(\tau) e^{-i 2\pi f \tau} \, d\tau$$
For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only:
$$\operatorname{R}_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f) \cos(2\pi f \tau) \, df$$
$$S_{XX}(f) = \int_{-\infty}^{\infty} \operatorname{R}_{XX}(\tau) \cos(2\pi f \tau) \, d\tau$$
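The theorem's discrete analogue can be checked numerically: for a finite signal treated as one period, the inverse DFT of the power spectrum |X[k]|² equals the circular autocorrelation. A sketch using a naive O(N²) DFT (illustrative only; a real implementation would use an FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, X[k] = sum_t x[t] e^{-2*pi*i*k*t/n}."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT with the 1/n normalization."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_autocorr(x):
    """Direct circular autocorrelation: r[lag] = sum_t x[(t+lag) mod n] * x[t]."""
    n = len(x)
    return [sum(x[(t + lag) % n] * x[t] for t in range(n)) for lag in range(n)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0, 0.0, 2.0]
power = [abs(X) ** 2 for X in dft(x)]   # power spectrum (periodogram)
via_wk = [c.real for c in idft(power)]  # Wiener-Khinchin route
direct = circular_autocorr(x)           # direct definition

print(all(abs(a - b) < 1e-9 for a, b in zip(via_wk, direct)))  # True
```

This identity is also why autocorrelations of long signals are computed in practice via the FFT rather than by the direct O(N²) sum.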
Auto-correlation of random vectors
The (potentially time-dependent) auto-correlation matrix (also called second moment) of a (potentially time-dependent) random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\rm T}$ is an $n \times n$ matrix containing as elements the autocorrelations of all pairs of elements of the random vector $\mathbf{X}$. The autocorrelation matrix is used in various digital signal processing algorithms.
For a random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\rm T}$ containing random elements whose expected value and variance exist, the auto-correlation matrix is defined by
$$\operatorname{R}_{\mathbf{X}\mathbf{X}} \triangleq \operatorname{E}\left[\mathbf{X} \mathbf{X}^{\rm T}\right]$$
where ${}^{\rm T}$ denotes transposition; the resulting matrix has dimensions $n \times n$.
Written component-wise:
$$\operatorname{R}_{\mathbf{X}\mathbf{X}} = \begin{bmatrix} \operatorname{E}[X_1 X_1] & \operatorname{E}[X_1 X_2] & \cdots & \operatorname{E}[X_1 X_n] \\ \operatorname{E}[X_2 X_1] & \operatorname{E}[X_2 X_2] & \cdots & \operatorname{E}[X_2 X_n] \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{E}[X_n X_1] & \operatorname{E}[X_n X_2] & \cdots & \operatorname{E}[X_n X_n] \end{bmatrix}$$
If $\mathbf{Z}$ is a complex random vector, the autocorrelation matrix is instead defined by
$$\operatorname{R}_{\mathbf{Z}\mathbf{Z}} \triangleq \operatorname{E}\left[\mathbf{Z} \mathbf{Z}^{\rm H}\right]$$
Here ${}^{\rm H}$ denotes the Hermitian transpose.
For example, if $\mathbf{X}$ is a random vector with $n$ elements, then $\operatorname{R}_{\mathbf{X}\mathbf{X}}$ is an $n \times n$ matrix whose $(i,j)$-th entry is $\operatorname{E}[X_i X_j]$.
Properties of the autocorrelation matrix
The autocorrelation matrix is a Hermitian matrix for complex random vectors and a symmetric matrix for real random vectors.
The autocorrelation matrix is a positive semidefinite matrix, i.e. $\mathbf{a}^{\rm T} \operatorname{R}_{\mathbf{X}\mathbf{X}} \mathbf{a} \geq 0$ for all $\mathbf{a} \in \mathbb{R}^n$ for a real random vector, and respectively $\mathbf{a}^{\rm H} \operatorname{R}_{\mathbf{Z}\mathbf{Z}} \mathbf{a} \geq 0$ for all $\mathbf{a} \in \mathbb{C}^n$ in case of a complex random vector.
All eigenvalues of the autocorrelation matrix are real and non-negative.
The auto-covariance matrix is related to the autocorrelation matrix as follows:
$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\rm T}\right] = \operatorname{R}_{\mathbf{X}\mathbf{X}} - \operatorname{E}[\mathbf{X}] \operatorname{E}[\mathbf{X}]^{\rm T}$$
Respectively for complex random vectors:
$$\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}\left[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])^{\rm H}\right] = \operatorname{R}_{\mathbf{Z}\mathbf{Z}} - \operatorname{E}[\mathbf{Z}] \operatorname{E}[\mathbf{Z}]^{\rm H}$$
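These properties can be checked numerically with a sample estimate of the autocorrelation matrix. The sketch below estimates R = E[X X^T] for a real random vector by averaging outer products over a few hypothetical sample draws:

```python
def autocorr_matrix(samples):
    """Sample estimate of R = E[X X^T] for a real random vector,
    averaging the outer products over the sample draws."""
    n = len(samples[0])
    m = len(samples)
    return [[sum(s[i] * s[j] for s in samples) / m for j in range(n)]
            for i in range(n)]

# Hypothetical draws of a 2-element real random vector.
samples = [[1.0, 2.0], [0.0, 1.0], [-1.0, 0.5], [2.0, -1.0]]
R = autocorr_matrix(samples)

# Symmetric for real random vectors ...
print(R[0][1] == R[1][0])  # True
# ... and positive semidefinite: a^T R a >= 0 for any vector a,
# since a^T R a is an average of squared dot products (a . x)^2.
a = [0.7, -1.3]
quad = sum(a[i] * R[i][j] * a[j] for i in range(2) for j in range(2))
print(quad >= 0)  # True
```

The quadratic form is non-negative by construction, since aᵀRa reduces to the sample average of (a·x)², mirroring the positive-semidefiniteness property stated above.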
Auto-correlation of deterministic signals
In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function.
Auto-correlation of continuous-time signal
Given a signal f(t), the continuous autocorrelation R_ff(τ) is most often defined as the continuous cross-correlation integral of f(t) with itself, at lag τ:

R_ff(τ) = ∫ f(t + τ) f̄(t) dt

where f̄(t) represents the complex conjugate of f(t). Note that the parameter t in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning.
Auto-correlation of discrete-time signal
The discrete autocorrelation R at lag ℓ for a discrete-time signal y(n) is

R_yy(ℓ) = Σ_n y(n) ȳ(n − ℓ)
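As a sketch, the energy-signal definition R(ℓ) = Σ_n y(n) ȳ(n − ℓ) can be computed directly in plain Python (the function name is illustrative):

```python
def acf(x, lag):
    """Discrete autocorrelation R(lag) = sum_n x[n] * conj(x[n - lag])
    for a finite-energy sequence x, treated as zero outside its support."""
    n = len(x)
    return sum(x[i] * complex(x[i - lag]).conjugate()
               for i in range(n) if 0 <= i - lag < n)
```

For real inputs the conjugate is a no-op; it is kept so the same helper also works for complex sequences.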
The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. For wide-sense-stationary random processes, the autocorrelations are defined as

R_ff(τ) = E[f(t + τ) f̄(t)]
R_yy(ℓ) = E[y(n) ȳ(n − ℓ)]

For processes that are not stationary, these will also be functions of t, or n.
For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to

R_ff(τ) = lim_{T→∞} (1/T) ∫₀ᵀ f(t + τ) f̄(t) dt
R_yy(ℓ) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} y(n) ȳ(n − ℓ)
These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes.
Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.)
Definition for periodic signals
If f is a continuous periodic function of period T, the integration from −∞ to ∞ is replaced by integration over any interval [t₀, t₀ + T] of length T:

R_ff(τ) ≜ ∫_{t₀}^{t₀+T} f(t + τ) f̄(t) dt

which is equivalent to

R_ff(τ) ≜ ∫_{t₀}^{t₀+T} f(t) f̄(t − τ) dt
Properties
In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes.
A fundamental property of the autocorrelation is symmetry, R_ff(−τ) = R̄_ff(τ), which is easy to prove from the definition. In the continuous case,
the autocorrelation is an even function, R_ff(−τ) = R_ff(τ), when f is a real function, and
the autocorrelation is a Hermitian function, R_ff(−τ) = R̄_ff(τ), when f is a complex function.
The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay τ, |R_ff(τ)| ≤ R_ff(0). This is a consequence of the rearrangement inequality. The same result holds in the discrete case.
The autocorrelation of a periodic function is, itself, periodic with the same period.
The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all τ) is the sum of the autocorrelations of each function separately.
Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation.
By using the symbol ∗ to represent convolution and letting g₋₁ be the function which time-reverses its argument, defined as g₋₁(f)(t) = f(−t), the definition for R_ff(τ) may be written as:

R_ff(τ) = (f ∗ g₋₁(f̄))(τ)
Multi-dimensional autocorrelation
Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal x(n, q, r) would be

R(j, k, ℓ) = Σ_{n,q,r} x(n, q, r) x̄(n − j, q − k, r − ℓ)
When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function.
Efficient computation
For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence (i.e. , and for all other values of ) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values:
Thus the required autocorrelation sequence is , where and the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e. then we get a circular autocorrelation (similar to circular convolution) where the left and right tails of the previous autocorrelation sequence will overlap and give which has the same period as the signal sequence The procedure can be regarded as an application of the convolution property of Z-transform of a discrete signal.
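The hand calculation described above can be mirrored directly in code. The sequence values here, x = (2, 3, −1), are an assumption chosen for illustration; the circular variant wraps indices around instead of assuming zeros outside the support:

```python
def linear_acf(x):
    # Lags -(N-1)..(N-1); the signal is taken as zero outside its support.
    n = len(x)
    return [sum(x[i] * x[i - k] for i in range(n) if 0 <= i - k < n)
            for k in range(-(n - 1), n)]

def circular_acf(x):
    # The signal is taken as periodic with period N: indices wrap around,
    # so the left and right tails of the linear result overlap and add.
    n = len(x)
    return [sum(x[i] * x[(i - k) % n] for i in range(n)) for k in range(n)]

x = [2, 3, -1]          # illustrative sequence, an assumption for this sketch
lin = linear_acf(x)     # symmetric about lag 0
circ = circular_acf(x)  # periodic with the same period as x
```

The symmetry of `lin` about lag 0 is the halving of work mentioned above, and each circular value is a sum of overlapped linear tails.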
While the brute force algorithm is order n², several efficient algorithms exist which can compute the autocorrelation in order n log(n). For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data X(t) with two fast Fourier transforms (FFT):

F_R(f) = FFT[X(t)]
S(f) = F_R(f) F_R*(f)
R(τ) = IFFT[S(f)]
where IFFT denotes the inverse fast Fourier transform. The asterisk denotes complex conjugate.
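A minimal sketch of this spectral route. A plain O(n²) DFT stands in for a library FFT so the example is self-contained; zero-padding to 2N − 1 points keeps the result linear rather than circular:

```python
import cmath

def dft(a, inverse=False):
    # O(n^2) discrete Fourier transform; a stand-in for a real FFT library.
    n = len(a)
    s = 1 if inverse else -1
    out = [sum(a[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def acf_via_spectrum(x):
    # Wiener-Khinchin route: autocorrelation = IDFT of the power spectrum.
    # Zero-padding to 2N-1 points makes the correlation linear, not circular.
    n = len(x)
    padded = list(x) + [0.0] * (n - 1)
    power = [abs(v) ** 2 for v in dft(padded)]
    r = dft(power, inverse=True)
    return [v.real for v in r[:n]]   # lags 0..N-1; negative lags sit at the end
```

Without the padding, the transform pair would return the circular autocorrelation described in the previous paragraph.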
Alternatively, a multiple τ correlation can be performed by using brute force calculation for low τ values, and then progressively binning the X(t) data with a logarithmic density to compute higher τ values, resulting in the same n log(n) efficiency, but with lower memory requirements.
Estimation
For a discrete process with known mean μ and variance σ² for which we observe n observations {X₁, X₂, …, X_n}, an estimate of the autocorrelation coefficient may be obtained as

R̂(k) = (1 / ((n − k) σ²)) Σ_{t=1}^{n−k} (X_t − μ)(X_{t+k} − μ)

for any positive integer k < n. When the true mean μ and variance σ² are known, this estimate is unbiased. If the true mean and variance of the process are not known there are several possibilities:
If μ and σ² are replaced by the standard formulae for sample mean and sample variance, then this is a biased estimate.
A periodogram-based estimate replaces n − k in the above formula with n. This estimate is always biased; however, it usually has a smaller mean squared error.
Other possibilities derive from treating the two portions of data {X₁, X₂, …, X_{n−k}} and {X_{k+1}, X_{k+2}, …, X_n} separately and calculating separate sample means and/or sample variances for use in defining the estimate.
The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of k, then form a function which is a valid autocorrelation in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the X's, the variance calculated may turn out to be negative.
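A sketch of the periodogram-style estimate, with the sample mean and biased sample variance plugged in for the unknown true values (the function name is illustrative):

```python
def sample_acf(x, max_lag):
    """Periodogram-style estimates of the autocorrelation coefficients:
    r(k) = sum_{t} (x[t] - m)(x[t+k] - m) / (n * s2), k = 0..max_lag,
    where m is the sample mean and s2 the biased sample variance."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / (n * s2)
            for k in range(max_lag + 1)]
```

Dividing by n rather than n − k is what makes the estimate biased but guarantees a valid (positive semidefinite) autocorrelation sequence.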
Regression analysis
In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used.
In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss–Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive.
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin–Watson statistic can, however, be linearly mapped to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where k is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR², where T is the sample size and R² is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as χ² with k degrees of freedom.
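A self-contained sketch of the auxiliary regression behind the Breusch–Godfrey statistic, using a hand-rolled least-squares solver. In practice one would use a statistics library; setting pre-sample residual lags to zero is one common convention, and all names are illustrative:

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (A assumed nonsingular).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def breusch_godfrey_stat(resid, regressors, k):
    """T*R^2 from regressing the residuals on an intercept, the original
    regressors, and k lags of the residuals (pre-sample lags set to zero).
    Asymptotically chi-squared with k degrees of freedom under H0."""
    T = len(resid)
    X = [[1.0] + [reg[t] for reg in regressors] +
         [resid[t - j] if t - j >= 0 else 0.0 for j in range(1, k + 1)]
         for t in range(T)]
    p = len(X[0])
    XtX = [[sum(X[t][i] * X[t][j] for t in range(T)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(X[t][i] * resid[t] for t in range(T)) for i in range(p)]
    beta = solve(XtX, Xty)
    fitted = [sum(b * v for b, v in zip(beta, row)) for row in X]
    mean = sum(resid) / T
    ss_tot = sum((v - mean) ** 2 for v in resid)
    ss_res = sum((v - f) ** 2 for v, f in zip(resid, fitted))
    return T * (1.0 - ss_res / ss_tot)
```

A perfectly alternating residual series, for example, is strongly lag-1 autocorrelated and yields a statistic close to the sample size T.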
Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).
In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have ρ(k) ≠ 0 for k ≤ q, and ρ(k) = 0 for k > q.
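The cutoff property can be checked against the closed form for the theoretical MA(q) autocorrelation, ρ(k) = Σ_i c_i c_{i+k} / Σ_i c_i² with c = (1, θ₁, …, θ_q); a sketch (function name illustrative):

```python
def ma_acf(theta, max_lag):
    """Theoretical autocorrelation of an MA(q) process
    x_t = e_t + theta_1*e_{t-1} + ... + theta_q*e_{t-q} (white noise e).
    The result is exactly zero for every lag k > q."""
    c = [1.0] + list(theta)
    denom = sum(v * v for v in c)
    return [sum(c[i] * c[i + k] for i in range(len(c) - k)) / denom
            if k < len(c) else 0.0
            for k in range(max_lag + 1)]
```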
Applications
Autocorrelation analysis is used heavily in fluorescence correlation spectroscopy to provide quantitative insight into molecular-level diffusion and chemical reactions.
Another application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators.
Autocorrelation is used to analyze dynamic light scattering data, which notably enables determination of the particle size distributions of nanometer-sized particles or micelles suspended in a fluid. A laser shining into the mixture produces a speckle pattern that results from the motion of the particles. Autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the viscosity of the fluid, the sizes of the particles can be calculated.
Utilized in the GPS system to correct for the propagation delay, or time shift, between the point of time at the transmission of the carrier signal at the satellites, and the point of time at the receiver on the ground. This is done by the receiver generating a replica signal of the 1,023-bit C/A (Coarse/Acquisition) code, and generating lines of code chips [-1,1] in packets of ten at a time, or 10,230 chips (1,023 × 10), shifting slightly as it goes along in order to accommodate for the doppler shift in the incoming satellite signal, until the receiver replica signal and the satellite signal codes match up.
The small-angle X-ray scattering intensity of a nanostructured system is the Fourier transform of the spatial autocorrelation function of the electron density.
In surface science and scanning probe microscopy, autocorrelation is used to establish a link between surface morphology and functional characteristics.
In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field.
In signal processing, autocorrelation can give information about repeating events like musical beats (for example, to determine tempo) or pulsar frequencies, though it cannot tell the position in time of the beat. It can also be used to estimate the pitch of a musical tone.
In music recording, autocorrelation is used as a pitch detection algorithm prior to vocal processing, as a distortion effect or to eliminate undesired mistakes and inaccuracies.
Autocorrelation in space rather than time, via the Patterson function, is used by X-ray diffractionists to help recover the "Fourier phase information" on atom positions not available through diffraction alone.
In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population.
The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide.
In astrophysics, autocorrelation is used to study and characterize the spatial distribution of galaxies in the universe and in multi-wavelength observations of low mass X-ray binaries.
In panel data, spatial autocorrelation refers to correlation of a variable with itself through space.
In analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination.
In geosciences (specifically in geophysics) it can be used to compute an autocorrelation seismic attribute, out of a 3D seismic survey of the underground.
In medical ultrasound imaging, autocorrelation is used to visualize blood flow.
In intertemporal portfolio choice, the presence or absence of autocorrelation in an asset's rate of return can affect the optimal portion of the portfolio to hold in that asset.
Autocorrelation has been used to accurately measure power system frequency in numerical relays.
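As a small illustration of the pitch-estimation application above, the fundamental period of a tone can be read off the dominant positive-lag peak of its autocorrelation; the sample rate and tone frequency below are assumptions for the sketch:

```python
import math

def estimate_period(frame):
    """Estimate the fundamental period (in samples) as the positive lag at
    which the frame's autocorrelation is largest."""
    n = len(frame)
    def r(k):
        return sum(frame[i] * frame[i - k] for i in range(k, n))
    return max(range(1, n // 2), key=r)

fs = 8000   # sample rate in Hz, an assumption
f0 = 1000   # frequency of a synthetic test tone, an assumption
frame = [math.sin(2 * math.pi * f0 * t / fs) for t in range(400)]
period = estimate_period(frame)   # expected near fs / f0 = 8 samples
pitch_hz = fs / period
```

As the text notes, the autocorrelation reveals the repetition rate but not where in time each cycle begins.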
Serial dependence
Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields however, the two terms are used as synonyms.
A time series of a random variable has serial dependence if the value at some time t in the series is statistically dependent on the value at another time s. A series is serially independent if there is no dependence between any pair.
If a time series {X_t} is stationary, then statistical dependence between the pair (X_t, X_s) would imply that there is statistical dependence between all pairs of values at the same lag τ = s − t.
See also
Autocorrelation matrix
Autocorrelation of a formal word
Autocorrelation technique
Autocorrelator
Cochrane–Orcutt estimation (transformation for autocorrelated error terms)
Correlation function
Correlogram
Cross-correlation
CUSUM
Fluorescence correlation spectroscopy
Optical autocorrelation
Partial autocorrelation function
Phylogenetic autocorrelation (Galton's problem)
Pitch detection algorithm
Prais–Winsten transformation
Scaled correlation
Triple correlation
Unbiased estimation of standard deviation
References
Further reading
Mojtaba Soltanalian, and Petre Stoica. "Computational design of sequences with good correlation properties." IEEE Transactions on Signal Processing, 60.5 (2012): 2180–2193.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
Klapetek, Petr (2018). Quantitative Data Processing in Scanning Probe Microscopy: SPM Applications for Nanometrology (Second ed.). Elsevier. pp. 108–112.
Signal processing
Time domain analysis
|
https://en.wikipedia.org/wiki/Aspartame
|
Aspartame is an artificial non-saccharide sweetener 200 times sweeter than sucrose and is commonly used as a sugar substitute in foods and beverages. It is a methyl ester of the aspartic acid/phenylalanine dipeptide with brand names NutraSweet, Equal, and Canderel. Aspartame was approved by the US Food and Drug Administration (FDA) in 1974, and then again in 1981, after approval was revoked in 1980.
Aspartame is one of the most studied food additives in the human food supply. Reviews by over 100 governmental regulatory bodies found the ingredient safe for consumption at the normal acceptable daily intake (ADI) limit.
Uses
Aspartame is around 180 to 200 times sweeter than sucrose (table sugar). Due to this property, even though aspartame produces roughly 4 kcal (17 kJ) of energy per gram when metabolized, about the same as sucrose, the quantity of aspartame needed to produce a sweet taste is so small that its caloric contribution is negligible. The sweetness of aspartame lasts longer than that of sucrose, so it is often blended with other artificial sweeteners such as acesulfame potassium to produce an overall taste more like that of sugar.
Like many other peptides, aspartame may hydrolyze (break down) into its constituent amino acids under conditions of elevated temperature or high pH. This makes aspartame undesirable as a baking sweetener and prone to degradation in products with a high pH, as required for a long shelf life. The stability of aspartame under heating can be improved to some extent by encasing it in fats or in maltodextrin. The stability when dissolved in water depends markedly on pH. At room temperature, it is most stable at pH 4.3, where its half-life is nearly 300 days. At pH 7, however, its half-life is only a few days. Most soft drinks have a pH between 3 and 5, where aspartame is reasonably stable. In products that may require a longer shelf life, such as syrups for fountain beverages, aspartame is sometimes blended with a more stable sweetener, such as saccharin.
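Treating the breakdown as first-order decay (an assumption on my part; the text only quotes half-lives), the fraction of aspartame remaining after a given time follows directly from the half-life:

```python
# First-order decay sketch: with half-life h (days), the fraction of
# aspartame remaining after t days is 0.5 ** (t / h). The 300-day figure
# at pH 4.3 comes from the text; first-order kinetics is an assumption.
def fraction_remaining(t_days, half_life_days):
    return 0.5 ** (t_days / half_life_days)
```

At pH 4.3, for instance, a month of storage would leave over 90% of the sweetener intact under this model, while at pH 7 (half-life of a few days) most of it would be gone in a couple of weeks.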
Descriptive analyses of solutions containing aspartame report a sweet aftertaste as well as bitter and off-flavor aftertastes.
Acceptable levels of consumption
The acceptable daily intake (ADI) value for food additives, including aspartame, is defined as the "amount of a food additive, expressed on a body weight basis, that can be ingested daily over a lifetime without appreciable health risk". The Joint FAO/WHO Expert Committee on Food Additives (JECFA) and the European Commission's Scientific Committee on Food (later becoming EFSA) have determined this value is 40 mg/kg of body weight per day for aspartame, while the FDA has set its ADI for aspartame at 50 mg/kg per day, an amount equated to consuming 75 packets of commercial aspartame sweetener per day, to be within a safe upper limit.
The primary source for exposure to aspartame in the US is diet soft drinks, though it can be consumed in other products, such as pharmaceutical preparations, fruit drinks, and chewing gum among others in smaller quantities. A can of diet soda contains of aspartame, and, for a adult, it takes approximately 21 cans of diet soda daily to consume the of aspartame that would surpass the FDA's 50 mg/kg of body weight ADI of aspartame from diet soda alone.
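The back-of-the-envelope ADI arithmetic can be sketched as follows; the per-can aspartame content and body weight below are illustrative assumptions, not figures from regulators:

```python
# Hypothetical worked example of the ADI arithmetic.
adi_mg_per_kg = 50      # FDA acceptable daily intake, mg per kg body weight
body_weight_kg = 75     # assumed adult body weight
mg_per_can = 180        # assumed aspartame content of one diet-soda can

daily_limit_mg = adi_mg_per_kg * body_weight_kg   # total mg/day for this adult
cans_to_reach_adi = daily_limit_mg / mg_per_can   # cans needed to hit the ADI
```

Under these assumed values the limit works out to roughly 21 cans per day, far above typical consumption.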
Reviews have analyzed studies which have looked at the consumption of aspartame in countries worldwide, including the US, countries in Europe, and Australia, among others. These reviews have found that even the high levels of intake of aspartame, studied across multiple countries and different methods of measuring aspartame consumption, are well below the ADI for safe consumption of aspartame. Reviews have also found that populations that are believed to be especially high consumers of aspartame, such as children and diabetics, are below the ADI for safe consumption, even considering extreme worst-case scenario calculations of consumption.
In a report released on 10 December 2013, the EFSA said that, after an extensive examination of evidence, it ruled out the "potential risk of aspartame causing damage to genes and inducing cancer" and deemed the amount found in diet sodas safe to consume.
Safety and health effects
The safety of aspartame has been studied since its discovery, and it is a rigorously tested food ingredient. Aspartame has been deemed safe for human consumption by over 100 regulatory agencies in their respective countries, including the US Food and Drug Administration (FDA), UK Food Standards Agency, the European Food Safety Authority (EFSA), Health Canada, and Food Standards Australia New Zealand.
Metabolism and body weight
Reviews of clinical trials have shown that using aspartame (or other non-nutritive sweeteners) in place of sugar reduces calorie intake and body weight in adults and children. A 2017 review of metabolic effects by consuming aspartame found that it did not affect blood glucose, insulin, total cholesterol, triglycerides, calorie intake, or body weight. While high-density lipoprotein levels were higher compared to control, they were lower compared to sucrose.
In 2023, the World Health Organization recommended against the use of common non-saccharide sweeteners (NSS), including aspartame, to control body weight or lower the risk of non-communicable diseases, stating: "The recommendation is based on the findings of a systematic review of the available evidence which suggests that use of NSS does not confer any long-term benefit in reducing body fat in adults or children. Results of the review also suggest that there may be potential undesirable effects from long-term use of NSS, such as an increased risk of type 2 diabetes, cardiovascular diseases, and mortality in adults."
Phenylalanine
High levels of the naturally occurring essential amino acid phenylalanine are a health hazard to those born with phenylketonuria (PKU), a rare inherited disease that prevents phenylalanine from being properly metabolized. Because aspartame contains a small amount of phenylalanine, foods containing aspartame sold in the US must state: "Phenylketonurics: Contains Phenylalanine" on product labels.
In the UK, foods that contain aspartame are required by the Food Standards Agency to list the substance as an ingredient, with the warning "Contains a source of phenylalanine". Manufacturers are also required to print "with sweetener(s)" on the label close to the main product name on foods that contain "sweeteners such as aspartame" or "with sugar and sweetener(s)" on "foods that contain both sugar and sweetener".
In Canada, foods that contain aspartame are required to list aspartame among the ingredients, include the amount of aspartame per serving, and state that the product contains phenylalanine.
Phenylalanine is one of the essential amino acids and is required for normal growth and maintenance of life. Concerns about the safety of phenylalanine from aspartame for those without phenylketonuria center largely on hypothetical changes in neurotransmitter levels as well as ratios of neurotransmitters to each other in the blood and brain that could lead to neurological symptoms. Reviews of the literature have found no consistent findings to support such concerns, and, while high doses of aspartame consumption may have some biochemical effects, these effects are not seen in toxicity studies to suggest aspartame can adversely affect neuronal function. As with methanol and aspartic acid, common foods in the typical diet, such as milk, meat, and fruits, will lead to ingestion of significantly higher amounts of phenylalanine than would be expected from aspartame consumption.
Cancer
Regulatory agencies, including the FDA and EFSA, and the US National Cancer Institute, have concluded that consuming aspartame is safe in amounts within acceptable daily intake levels and does not cause cancer. These conclusions are based on various sources of evidence, such as reviews and epidemiological studies finding no association between aspartame and cancer.
In July 2023, scientists for the International Agency for Research on Cancer (IARC) concluded that there was "limited evidence" for aspartame causing cancer in humans, classifying the sweetener as Group 2B (possibly carcinogenic). The lead investigator of the IARC report stated that the classification "shouldn't really be taken as a direct statement that indicates that there is a known cancer hazard from consuming aspartame. This is really more of a call to the research community to try to better clarify and understand the carcinogenic hazard that may or may not be posed by aspartame consumption."
The Joint FAO/WHO Expert Committee on Food Additives (JECFA) added that the limited cancer assessment indicated no reason to change the recommended acceptable daily intake level of 40 mg per kg of body weight per day, reaffirming the safety of consuming aspartame within this limit.
The FDA responded to the report by stating:
Neurotoxicity symptoms
Reviews found no evidence that low doses of aspartame would plausibly lead to neurotoxic effects. A review of studies on children did not show any significant findings for safety concerns with regard to neuropsychiatric conditions such as panic attacks, mood changes, hallucinations, attention deficit hyperactivity disorder (ADHD), or seizures by consuming aspartame.
Headaches
Reviews have found little evidence to indicate that aspartame induces headaches, although certain subsets of consumers may be sensitive to it.
Water quality
Aspartame passes through wastewater treatment plants mainly unchanged.
Mechanism of action
The perceived sweetness of aspartame (and other sweet substances like acesulfame potassium) in humans is due to its binding of the heterodimer G protein-coupled receptor formed by the proteins TAS1R2 and TAS1R3. Aspartame is not recognized by rodents due to differences in the taste receptors.
Metabolites
Aspartame is rapidly hydrolyzed in the small intestine by digestive enzymes which break aspartame down into methanol, phenylalanine, aspartic acid, and further metabolites, such as formaldehyde and formic acid. Due to its rapid and complete metabolism, aspartame is not found in circulating blood, even following ingestion of high doses over 200 mg/kg.
Aspartic acid
Aspartic acid (aspartate) is one of the most common amino acids in the typical diet. As with methanol and phenylalanine, intake of aspartic acid from aspartame is less than would be expected from other dietary sources. At the 90th percentile of intake, aspartame provides only between 1% and 2% of the daily intake of aspartic acid.
Methanol
The methanol produced by aspartame metabolism is unlikely to be a safety concern for several reasons. The amount of methanol produced from aspartame-sweetened foods and beverages is likely to be less than that from food sources already in diets. With regard to formaldehyde, it is rapidly converted in the body, and the amounts of formaldehyde from the metabolism of aspartame are trivial when compared to the amounts produced routinely by the human body and from other foods and drugs. At the highest expected human doses of consumption of aspartame, there are no increased blood levels of methanol or formic acid, and ingesting aspartame at the 90th percentile of intake would produce 25 times less methanol than what would be considered toxic.
Chemistry
Aspartame is a methyl ester of the dipeptide of the natural amino acids L-aspartic acid and L-phenylalanine. Under strongly acidic or alkaline conditions, aspartame may generate methanol by hydrolysis. Under more severe conditions, the peptide bonds are also hydrolyzed, resulting in free amino acids.
Two approaches to synthesis are used commercially. In the chemical synthesis, the two carboxyl groups of aspartic acid are joined into an anhydride, and the amino group is protected with a formyl group as the formamide, by treatment of aspartic acid with a mixture of formic acid and acetic anhydride. Phenylalanine is converted to its methyl ester and combined with the N-formyl aspartic anhydride; then the protecting group is removed from aspartic nitrogen by acid hydrolysis. The drawback of this technique is that a byproduct, the bitter-tasting β-form, is produced when the wrong carboxyl group from aspartic acid anhydride links to phenylalanine, with desired and undesired isomer forming in a 4:1 ratio. A process using an enzyme from Bacillus thermoproteolyticus to catalyze the condensation of the chemically altered amino acids will produce high yields without the β-form byproduct. A variant of this method, which has not been used commercially, uses unmodified aspartic acid but produces low yields. Methods for directly producing aspartyl-phenylalanine by enzymatic means, followed by chemical methylation, have also been tried but not scaled for industrial production.
History
Aspartame was discovered in 1965 by James M. Schlatter, a chemist working for G.D. Searle & Company. Schlatter had synthesized aspartame as an intermediate step in generating a tetrapeptide of the hormone gastrin, for use in assessing an anti-ulcer drug candidate. He discovered its sweet taste when he licked his finger, which had become contaminated with aspartame, to lift up a piece of paper. Torunn Atteraas Garin participated in the development of aspartame as an artificial sweetener.
In 1975, prompted by issues regarding Flagyl and Aldactone, an FDA task force team reviewed 25 studies submitted by the manufacturer, including 11 on aspartame. The team reported "serious deficiencies in Searle's operations and practices". The FDA sought to authenticate 15 of the submitted studies against the supporting data. In 1979, the Center for Food Safety and Applied Nutrition (CFSAN) concluded, since many problems with the aspartame studies were minor and did not affect the conclusions, the studies could be used to assess aspartame's safety.
In 1980, the FDA convened a Public Board of Inquiry (PBOI) consisting of independent advisors charged with examining the purported relationship between aspartame and brain cancer. The PBOI concluded aspartame does not cause brain damage, but it recommended against approving aspartame at that time, citing unanswered questions about cancer in laboratory rats.
In 1983, the FDA approved aspartame for use in carbonated beverages and for use in other beverages, baked goods, and confections in 1993. In 1996, the FDA removed all restrictions from aspartame, allowing it to be used in all foods. As of May 2023, the FDA stated that it regards aspartame as a safe food ingredient when consumed within the acceptable daily intake level of 50 mg per kg of body weight per day.
Several European Union countries approved aspartame in the 1980s, with EU-wide approval in 1994. The Scientific Committee on Food (SCF) reviewed subsequent safety studies and reaffirmed the approval in 2002. The European Food Safety Authority (EFSA) reported in 2006 that the previously established Acceptable daily intake (ADI) was appropriate, after reviewing yet another set of studies.
Compendial status
British Pharmacopoeia
United States Pharmacopeia
Commercial uses
Under the brand names Equal, NutraSweet, and Canderel, aspartame is an ingredient in approximately 6,000 consumer foods and beverages sold worldwide, including (but not limited to) diet sodas and other soft drinks, instant breakfasts, breath mints, cereals, sugar-free chewing gum, cocoa mixes, frozen desserts, gelatin desserts, juices, laxatives, chewable vitamin supplements, milk drinks, pharmaceutical drugs and supplements, shake mixes, tabletop sweeteners, teas, instant coffees, topping mixes, wine coolers, and yogurt. It is provided as a table condiment in some countries. Aspartame is less suitable for baking than other sweeteners because it breaks down when heated and loses much of its sweetness.
NutraSweet Company
In 1985, Monsanto bought G.D. Searle, and the aspartame business became a separate Monsanto subsidiary, NutraSweet. In March 2000, Monsanto sold it to J.W. Childs Associates Equity Partners II L.P. European use patents on aspartame expired starting in 1987, and the US patent expired in 1992.
Ajinomoto
Many aspects of industrial synthesis of aspartame were established by Ajinomoto. In 2004, the market for aspartame, in which Ajinomoto, the world's largest aspartame manufacturer, had a 40% share, was a year, and consumption of the product was rising by 2% a year. Ajinomoto acquired its aspartame business in 2000 from Monsanto for $67 million.
In 2007, Asda was the first British supermarket chain to remove all artificial flavourings and colours in its store brand foods. In 2008, Ajinomoto sued Asda, part of Walmart, for a malicious falsehood action concerning its aspartame product when the substance was listed as excluded from the chain's product line, along with other "nasties". In July 2009, a British court ruled in favor of Asda. In June 2010, an appeals court reversed the decision, allowing Ajinomoto to pursue a case against Asda to protect aspartame's reputation. Asda said that it would continue to use the term "no nasties" on its own-label products, but the suit was settled in 2011 with Asda choosing to remove references to aspartame from its packaging.
In November 2009, Ajinomoto announced a new brand name for its aspartame sweetener—AminoSweet.
Holland Sweetener Company
A joint venture of DSM and Tosoh, the Holland Sweetener Company manufactured aspartame using the enzymatic process developed by Toyo Soda (Tosoh) and sold as the brand Sanecta. Additionally, they developed a combination aspartame-acesulfame salt under the brand name Twinsweet. They left the sweetener industry in 2006, because "global aspartame markets are facing structural oversupply, which has caused worldwide strong price erosion over the last five years", making the business "persistently unprofitable".
Competing products
Because sucralose, unlike aspartame, retains its sweetness after being heated, and has at least twice the shelf life of aspartame, it has become more popular as an ingredient. This, along with differences in marketing and changing consumer preferences, caused aspartame to lose market share to sucralose. In 2004, aspartame traded at about and sucralose, which is roughly three times sweeter by weight, at around .
See also
Alitame
Aspartame controversy
Neotame
Phenylalanine ammonia lyase
Stevia
References
External links
Amino acid derivatives
Aromatic compounds
Beta-Amino acids
Butyramides
Dipeptides
Carboxylate esters
Sugar substitutes
Methyl esters
E-number additives
|
https://en.wikipedia.org/wiki/AutoCAD
|
AutoCAD is a 2D and 3D computer-aided design (CAD) software application for desktop, web, and mobile developed by Autodesk. It was first released in December 1982 for the CP/M and IBM PC platforms as a desktop app running on microcomputers with internal graphics controllers. Initially a DOS application, subsequent versions were later released for other platforms including Classic Mac OS (1992), Microsoft Windows (1992), web browsers (2010), iOS (2010), macOS (2010), and Android (2011).
AutoCAD is a general drafting and design application used in industry by architects, project managers, engineers, graphic designers, city planners and other professionals to prepare technical drawings. After discontinuing the sale of perpetual licenses in January 2016, commercial versions of AutoCAD are licensed through a term-based subscription.
History
Before AutoCAD was introduced, most commercial CAD programs ran on mainframe computers or minicomputers, with each CAD operator (user) working at a separate graphics terminal.
Origins
AutoCAD was derived from a program begun in 1977 and released in 1979 called Interact CAD, also referred to in early Autodesk documents as MicroCAD, which was written by Autodesk cofounder Michael Riddle prior to the formation of Autodesk (then Marinchip Software Partners).
The first version by Autodesk was demonstrated at the 1982 Comdex and released that December. AutoCAD supported CP/M-80 computers. As Autodesk's flagship product, by March 1986 AutoCAD had become the most widely used CAD program worldwide. The 2022 release marked the 36th major release of AutoCAD for Windows and the 12th consecutive year of AutoCAD for Mac. The native file format of AutoCAD is .dwg. This and, to a lesser extent, its interchange file format DXF, have become de facto, if proprietary, standards for CAD data interoperability, particularly for 2D drawing exchange. AutoCAD has included support for .dwf, a format developed and promoted by Autodesk, for publishing CAD data.
Features
Compatibility with other software
ESRI ArcMap 10 permits export as AutoCAD drawing files. Civil 3D permits export as AutoCAD objects and as LandXML. Third-party file converters exist for specific formats such as Bentley MX GENIO Extension, PISTE Extension (France), ISYBAU (Germany), OKSTRA and Microdrainage (UK). Conversion of .pdf files is also feasible, although the accuracy of the results may be unpredictable, with distortions such as jagged edges. Several vendors provide online conversions for free, such as Cometdocs.
Language
AutoCAD and AutoCAD LT are available for English, German, French, Italian, Spanish, Japanese, Korean, Chinese Simplified, Chinese Traditional, Brazilian Portuguese, Russian, Czech, Polish and Hungarian (also through additional language packs). The extent of localization varies from full translation of the product to documentation only. The AutoCAD command set is localized as a part of the software localization.
Extensions
AutoCAD supports a number of APIs for customization and automation. These include AutoLISP, Visual LISP, VBA, .NET and ObjectARX. ObjectARX is a C++ class library, which was also the base for:
products extending AutoCAD functionality to specific fields
creating products such as AutoCAD Architecture, AutoCAD Electrical, AutoCAD Civil 3D
third-party AutoCAD-based applications
There are a large number of AutoCAD plugins (add-on applications) available on the application store Autodesk Exchange Apps.
AutoCAD's DXF, drawing exchange format, allows importing and exporting drawing information.
Vertical integration
Autodesk has also developed a few vertical programs for discipline-specific enhancements such as:
Advance Steel
AutoCAD Architecture
AutoCAD Electrical
AutoCAD Map 3D
AutoCAD Mechanical
AutoCAD MEP
AutoCAD Plant 3D
Autodesk Civil 3D
Since AutoCAD 2019, several verticals have been included with an AutoCAD subscription as Industry-Specific Toolsets.
For example, AutoCAD Architecture (formerly Architectural Desktop) permits architectural designers to draw 3D objects, such as walls, doors, and windows, with more intelligent data associated with them rather than simple objects, such as lines and circles. The data can be programmed to represent specific architectural products sold in the construction industry, or extracted into a data file for pricing, materials estimation, and other values related to the objects represented.
Additional tools generate standard 2D drawings, such as elevations and sections, from a 3D architectural model. Similarly, Civil Design, Civil Design 3D, and Civil Design Professional support data-specific objects facilitating easy standard civil engineering calculations and representations.
Softdesk Civil was developed as an AutoCAD add-on by a company in New Hampshire called Softdesk (originally DCA). Softdesk was acquired by Autodesk, and Civil became Land Development Desktop (LDD), later renamed Land Desktop. Civil 3D was later developed and Land Desktop was retired.
File formats
AutoCAD's native file formats are denoted either by a .dwg, .dwt, .dws, or .dxf filename extension.
The primary file format for 2D and 3D drawing files created with AutoCAD is .dwg. While other third-party CAD software applications can create .dwg files, AutoCAD uniquely creates RealDWG files.
Using AutoCAD, any .dwg file may be saved to a derivative format. These derivative formats include:
Drawing Template Files .dwt: New .dwg are created from a .dwt file. Although the default template file is acad.dwt for AutoCAD and acadlt.dwt for AutoCAD LT, custom .dwt files may be created to include foundational configurations such as drawing units and layers.
Drawing Standards File .dws: Using the CAD Standards feature of AutoCAD, a Drawing Standards File may be associated to any .dwg or .dwt file to enforce graphical standards.
Drawing Interchange Format .dxf: The .dxf format is an ASCII representation of a .dwg file, and is used to transfer data between various applications.
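Because the ASCII .dxf format is built from alternating lines (an integer group code followed by its value), a minimal reader is easy to sketch. The function name `read_dxf_tags` and the sample fragment below are illustrative only; production code would typically use a dedicated library rather than hand-rolled parsing:

```python
import os
import tempfile

def read_dxf_tags(path):
    """Yield (group_code, value) pairs from an ASCII DXF file.

    ASCII DXF stores data as alternating lines: an integer group code
    followed by the value it tags.
    """
    with open(path, "r", encoding="ascii", errors="replace") as f:
        while True:
            code_line = f.readline()
            value_line = f.readline()
            if not code_line or not value_line:
                break
            yield int(code_line.strip()), value_line.strip()

# Demo on a hand-written HEADER fragment ($ACADVER holds the drawing
# format version string; AC1027 corresponds to the AutoCAD 2013 format).
sample = "0\nSECTION\n2\nHEADER\n9\n$ACADVER\n1\nAC1027\n0\nENDSEC\n"
with tempfile.NamedTemporaryFile("w", suffix=".dxf", delete=False) as tmp:
    tmp.write(sample)
tags = list(read_dxf_tags(tmp.name))
os.unlink(tmp.name)
print(tags[:2])  # first two (group_code, value) pairs
```

This pairwise structure is what makes DXF convenient for data transfer between applications: a consumer only needs to walk the code/value stream and react to the codes it understands.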
Variants
AutoCAD LT
AutoCAD LT is the lower-cost version of AutoCAD, with reduced capabilities, first released in November 1993. Autodesk developed AutoCAD LT to have an entry-level CAD package to compete in the lower price level. Priced at $495, it became the first AutoCAD product priced below $1000. It was sold directly by Autodesk and in computer stores unlike the full version of AutoCAD, which must be purchased from official Autodesk dealers. AutoCAD LT 2015 introduced Desktop Subscription service from $360 per year; as of 2018, three subscription plans were available, from $50 a month to a 3-year, $1170 license.
While there are hundreds of small differences between the full AutoCAD package and AutoCAD LT, there are a few recognized major differences in the software's features:
3D capabilities: AutoCAD LT lacks the ability to create, visualize and render 3D models as well as 3D printing.
Network licensing: AutoCAD LT cannot be used on multiple machines over a network.
Customization: AutoCAD LT does not support customization with LISP, ARX, .NET and VBA.
Management and automation capabilities: AutoCAD LT lacks the Sheet Set Manager and Action Recorder tools.
CAD standards management: AutoCAD LT lacks the CAD standards management tools.
AutoCAD Mobile and AutoCAD Web
AutoCAD Mobile and AutoCAD Web (formerly AutoCAD WS and AutoCAD 360) is an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via mobile device and web using a limited AutoCAD feature set — and using cloud-stored drawing files. The program, which is an evolution and combination of previous products, uses a freemium business model with a free plan and two paid levels, including various amounts of storage, tools, and online access to drawings. 360 includes new features such as a "Smart Pen" mode and linking to third-party cloud-based storage such as Dropbox. Having evolved from Flash-based software, AutoCAD Web uses HTML5 browser technology available in newer browsers including Firefox and Google Chrome.
AutoCAD WS began with a version for the iPhone and subsequently expanded to include versions for the iPod Touch, iPad, Android phones, and Android tablets. Autodesk released the iOS version in September 2010, following with the Android version on April 20, 2011. The program is available via download at no cost from the App Store (iOS), Google Play (Android) and Amazon Appstore (Android).
In its initial iOS version, AutoCAD WS supported drawing of lines, circles, and other shapes; creation of text and comment boxes; and management of color, layer, and measurements — in both landscape and portrait modes. Version 1.3, released August 17, 2011, added support for unit typing, layer visibility, area measurement and file management. The Android variant includes the iOS feature set along with such unique features as the ability to insert text or captions by voice command as well as manually. Both Android and iOS versions allow the user to save files on-line — or off-line in the absence of an Internet connection.
In 2011, Autodesk announced plans to migrate the majority of its software to "the cloud", starting with the AutoCAD WS mobile application.
According to a 2013 interview with Ilai Rotbaein, an AutoCAD WS product manager for Autodesk, the name AutoCAD WS had no definitive meaning, and was interpreted variously as Autodesk Web Service, White Sheet or Work Space. In 2013, AutoCAD WS was renamed to AutoCAD 360. Later, it was renamed to AutoCAD Web App.
Student versions
AutoCAD is licensed, for free, to students, educators, and educational institutions, with a 12-month renewable license available. Licenses acquired before March 25, 2020 were 36-month licenses, with the last renewal on March 24, 2020. The student version of AutoCAD is functionally identical to the full commercial version, with one exception: DWG files created or edited by a student version have an internal bit-flag set (the "educational flag"). When such a DWG file is printed by any version of AutoCAD (commercial or student) that is older than AutoCAD 2014 SP1 or is AutoCAD 2019 or newer, the output includes a plot stamp/banner on all four sides. Objects created in the Student Version cannot be used commercially. Student Version objects "infect" a commercial version DWG file if they are imported in versions older than AutoCAD 2015 or newer than AutoCAD 2018.
Ports
Windows
AutoCAD Release 12 in 1992 was the first version of the software to support the Windows platform, in that case Windows 3.1. After Release 14 in 1997, support for MS-DOS, Unix and Macintosh was dropped, and AutoCAD was supported exclusively on Windows. In general, any new AutoCAD version supports the current Windows version and some older ones. AutoCAD 2016 to 2020 support Windows 7 through Windows 10.
Mac
Autodesk stopped supporting Apple's Macintosh computers in 1994. Over the next several years, no compatible versions for the Mac were released. In 2010, Autodesk announced that it would once again support Apple's Mac OS X software. Most of the features found in the 2012 Windows version can be found in the 2012 Mac version; the main differences are the user interface and layout of the program. The interface is designed so that users who are already familiar with Apple's macOS software will find it similar to other Mac applications. Autodesk also built in various features to take full advantage of Apple's Trackpad capabilities as well as the full-screen mode in Apple's OS X Lion. AutoCAD 2012 for Mac supports both the editing and saving of files in DWG format, allowing files to remain compatible with platforms besides macOS. AutoCAD 2019 for Mac requires OS X El Capitan or later.
AutoCAD LT 2013 was available through the Mac App Store for $899.99. The full-featured version of AutoCAD 2013 for Mac, however, wasn't available through the Mac App Store due to the price limit of $999 set by Apple. AutoCAD 2014 for Mac was available for purchase from Autodesk's web site for $4,195 and AutoCAD LT 2014 for Mac for $1,200, or from an Autodesk authorized reseller. The latest version available for Mac is AutoCAD 2022 as of January 2022.
Version history
See also
Autodesk 3ds Max
Autodesk Maya
Autodesk Revit
AutoShade
AutoSketch
Comparison of computer-aided design software
Design Web Format
Open source CAD software:
LibreCAD
FreeCAD
BRL-CAD
References
Further reading
External links
Autodesk products
1982 software
IRIX software
Computer-aided design software
IOS software
Classic Mac OS software
Android (operating system) software
MacOS computer-aided design software
Software that uses Qt
|
https://en.wikipedia.org/wiki/Alkene
|
In organic chemistry, an alkene is a hydrocarbon containing a carbon–carbon double bond. The double bond may be internal or in the terminal position. Terminal alkenes are also known as α-olefins.
The International Union of Pure and Applied Chemistry (IUPAC) recommends using the name "alkene" only for acyclic hydrocarbons with just one double bond; alkadiene, alkatriene, etc., or polyene for acyclic hydrocarbons with two or more double bonds; cycloalkene, cycloalkadiene, etc. for cyclic ones; and "olefin" for the general class – cyclic or acyclic, with one or more double bonds.
Acyclic alkenes, with only one double bond and no other functional groups (also known as mono-enes) form a homologous series of hydrocarbons with the general formula CnH2n, with n being 2 or more (two hydrogens fewer than the corresponding alkane). When n is four or more, isomers are possible, distinguished by the position and conformation of the double bond.
Alkenes are generally colorless non-polar compounds, somewhat similar to alkanes but more reactive. The first few members of the series are gases or liquids at room temperature. The simplest alkene, ethylene (C2H4) (or "ethene" in the IUPAC nomenclature) is the organic compound produced on the largest scale industrially.
Aromatic compounds are often drawn as cyclic alkenes; however, their structure and properties are sufficiently distinct that they are not classified as alkenes or olefins. Hydrocarbons with two overlapping double bonds (C=C=C) are called allenes—the simplest such compound is itself called allene—and those with three or more overlapping bonds (C=C=C=C, etc.) are called cumulenes.
Structural isomerism
Alkenes having four or more carbon atoms can form diverse structural isomers. Most alkenes are also isomers of cycloalkanes. Acyclic alkene structural isomers with only one double bond follow:
C2H4: ethylene only
C3H6: propylene only
C4H8: 3 isomers: 1-butene, 2-butene, and isobutylene
C5H10: 5 isomers: 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, 2-methyl-2-butene
C6H12: 13 isomers: 1-hexene, 2-hexene, 3-hexene, 2-methyl-1-pentene, 3-methyl-1-pentene, 4-methyl-1-pentene, 2-methyl-2-pentene, 3-methyl-2-pentene, 4-methyl-2-pentene, 2,3-dimethyl-1-butene, 3,3-dimethyl-1-butene, 2,3-dimethyl-2-butene, 2-ethyl-1-butene
Many of these molecules exhibit cis–trans isomerism. There may also be chiral carbon atoms particularly within the larger molecules (from ). The number of potential isomers increases rapidly with additional carbon atoms.
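The CnH2n general formula and the isomer counts tabulated above can be captured in a short Python sketch (`monoene_formula` and `MONOENE_ISOMERS` are illustrative names, with the counts taken directly from the list in the text):

```python
def monoene_formula(n):
    """General molecular formula CnH2n for an acyclic mono-ene."""
    if n < 2:
        raise ValueError("an alkene needs at least two carbon atoms")
    return f"C{n}H{2 * n}"

# Structural-isomer counts for acyclic alkenes with one double bond,
# from the list above.
MONOENE_ISOMERS = {2: 1, 3: 1, 4: 3, 5: 5, 6: 13}

assert monoene_formula(2) == "C2H4"    # ethylene
assert monoene_formula(6) == "C6H12"   # the thirteen hexene isomers
```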
Structure and bonding
Bonding
A carbon–carbon double bond consists of a sigma bond and a pi bond. This double bond is stronger than a single covalent bond (611 kJ/mol for C=C vs. 347 kJ/mol for C–C), but not twice as strong. Double bonds are shorter than single bonds with an average bond length of 1.33 Å (133 pm) vs 1.53 Å for a typical C-C single bond.
Each carbon atom of the double bond uses its three sp2 hybrid orbitals to form sigma bonds to three atoms (the other carbon atom and two hydrogen atoms). The unhybridized 2p atomic orbitals, which lie perpendicular to the plane created by the axes of the three sp2 hybrid orbitals, combine to form the pi bond. This bond lies outside the main C–C axis, with half of the bond on one side of the molecule and a half on the other. With a strength of 65 kcal/mol, the pi bond is significantly weaker than the sigma bond.
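The two bond-energy figures quoted above are mutually consistent, which a quick arithmetic check makes explicit: the pi component of the C=C bond is roughly the double-bond energy minus a C–C sigma bond, and should land near the 65 kcal/mol figure for the pi bond.

```python
# Consistency check of the bond energies quoted in the text.
KCAL_TO_KJ = 4.184

double_bond_kj = 611.0   # C=C double bond
sigma_bond_kj = 347.0    # C-C single (sigma) bond
pi_estimate_kj = double_bond_kj - sigma_bond_kj   # 264 kJ/mol
pi_quoted_kj = 65 * KCAL_TO_KJ                    # ~272 kJ/mol

# The two estimates agree to within a few kJ/mol.
assert abs(pi_estimate_kj - pi_quoted_kj) < 10
print(f"pi bond ~ {pi_estimate_kj:.0f} kJ/mol vs quoted {pi_quoted_kj:.0f} kJ/mol")
```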
Rotation about the carbon–carbon double bond is restricted because it incurs an energetic cost to break the alignment of the p orbitals on the two carbon atoms. Consequently cis or trans isomers interconvert so slowly that they can be freely handled at ambient conditions without isomerization. More complex alkenes may be named with the E–Z notation for molecules with three or four different substituents (side groups). For example, of the isomers of butene, the two methyl groups of (Z)-but-2-ene (a.k.a. cis-2-butene) appear on the same side of the double bond, and in (E)-but-2-ene (a.k.a. trans-2-butene) the methyl groups appear on opposite sides. These two isomers of butene have distinct properties.
Shape
As predicted by the VSEPR model of electron pair repulsion, the molecular geometry of alkenes includes bond angles about each carbon atom in a double bond of about 120°. The angle may vary because of steric strain introduced by nonbonded interactions between functional groups attached to the carbon atoms of the double bond. For example, the C–C–C bond angle in propylene is 123.9°.
For bridged alkenes, Bredt's rule states that a double bond cannot occur at the bridgehead of a bridged ring system unless the rings are large enough. Following Fawcett and defining S as the total number of non-bridgehead atoms in the rings, bicyclic systems require S ≥ 7 for stability and tricyclic systems require S ≥ 11.
Isomerism
In organic chemistry, the prefixes cis- and trans- are used to describe the positions of functional groups attached to carbon atoms joined by a double bond. In Latin, cis and trans mean "on this side of" and "on the other side of" respectively. Therefore, if the functional groups are both on the same side of the carbon chain, the bond is said to have cis- configuration; otherwise (i.e. if the functional groups are on opposite sides of the carbon chain), the bond is said to have trans- configuration.
For cis- and trans- configurations to apply, there must be a continuing carbon chain, or at least one functional group attached to each carbon that is the same for both. The E- and Z- configurations can be used instead in the more general case where all four functional groups attached to the carbon atoms of the double bond are different. E- and Z- are abbreviations of the German words entgegen (opposite) and zusammen (together), respectively. In E- and Z-isomerism, each functional group is assigned a priority based on the Cahn–Ingold–Prelog priority rules. If the two groups with higher priority are on the same side of the double bond, the bond is assigned the Z- configuration; otherwise (i.e. if the two groups with higher priority are on opposite sides of the double bond), the bond is assigned the E- configuration. Cis- and trans- configurations do not have a fixed relationship with E- and Z- configurations.
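Once the Cahn–Ingold–Prelog priorities have been determined, the final E/Z assignment is a simple same-side/opposite-side comparison, which can be sketched as follows (`ez_label` is an illustrative name; determining the priorities themselves is assumed done beforehand):

```python
def ez_label(high_priority_side_c1, high_priority_side_c2):
    """Assign E or Z given which side of the double bond the
    higher-priority substituent lies on for each carbon.

    Sides are arbitrary labels (e.g. "up"/"down"). Same side -> Z
    (zusammen, together); opposite sides -> E (entgegen, opposite).
    """
    return "Z" if high_priority_side_c1 == high_priority_side_c2 else "E"

# (Z)-but-2-ene: both methyl groups (higher priority than H) on the same side.
assert ez_label("up", "up") == "Z"
# (E)-but-2-ene: the methyl groups on opposite sides.
assert ez_label("up", "down") == "E"
```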
Physical properties
Many of the physical properties of alkenes and alkanes are similar: they are colorless, nonpolar, and combustible. The physical state depends on molecular mass: like the corresponding saturated hydrocarbons, the simplest alkenes (ethylene, propylene, and butene) are gases at room temperature. Linear alkenes of approximately five to sixteen carbon atoms are liquids, and higher alkenes are waxy solids. The melting point of the solids also increases with increase in molecular mass.
Alkenes generally have stronger smells than their corresponding alkanes. Ethylene has a sweet and musty odor. Strained alkenes, in particular, like norbornene and trans-cyclooctene are known to have strong, unpleasant odors, a fact consistent with the stronger π complexes they form with metal ions including copper.
Boiling and melting points
Below is a list of the boiling and melting points of various alkenes with the corresponding alkane and alkyne analogues.
Infrared spectroscopy
The stretching of the C=C bond gives an IR absorption peak at 1670–1600 cm−1, while the bending of the C=C bond absorbs between 1000 and 650 cm−1.
NMR spectroscopy
In 1H NMR spectroscopy, hydrogens bonded directly to the double-bond carbons give a δH of 4.5–6.5 ppm. The double bond also deshields the hydrogens attached to the carbons adjacent to the sp2 carbons (allylic hydrogens), which give peaks at δH = 1.6–2.6 ppm. Cis and trans isomers are distinguishable by their different J-coupling constants: cis vicinal hydrogens have coupling constants in the range of 6–14 Hz, whereas trans vicinal hydrogens have coupling constants of 11–18 Hz.
In the 13C NMR spectra of alkenes, the double bond also deshields the carbons, shifting them downfield. C=C double-bond carbons usually have chemical shifts of about 100–170 ppm.
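The cis/trans distinction by vicinal coupling constant can be expressed as a small classifier, using the ranges quoted in the text (cis 6–14 Hz, trans 11–18 Hz; the 11–14 Hz overlap cannot discriminate). `vicinal_coupling_hint` is an illustrative name:

```python
def vicinal_coupling_hint(j_hz):
    """Suggest cis/trans geometry from a vicinal 3J(H,H) coupling
    constant across a C=C bond. Ranges from the text: cis 6-14 Hz,
    trans 11-18 Hz; the 11-14 Hz overlap is ambiguous."""
    cis = 6 <= j_hz <= 14
    trans = 11 <= j_hz <= 18
    if cis and trans:
        return "ambiguous"
    if cis:
        return "cis"
    if trans:
        return "trans"
    return "atypical"

assert vicinal_coupling_hint(8) == "cis"
assert vicinal_coupling_hint(16) == "trans"
assert vicinal_coupling_hint(12) == "ambiguous"
```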
Combustion
Like most other hydrocarbons, alkenes combust to give carbon dioxide and water.
The combustion of alkenes releases less energy than the combustion of the same molar amount of saturated hydrocarbons with the same number of carbon atoms.
This trend can be seen clearly in the list of standard enthalpies of combustion of hydrocarbons.
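For a mono-ene CnH2n, complete combustion follows CnH2n + (3n/2) O2 → n CO2 + n H2O, which a short sketch can generate for any n (`monoene_combustion` is an illustrative name; exact fractional coefficients are kept via `fractions.Fraction`):

```python
from fractions import Fraction

def monoene_combustion(n):
    """Balanced combustion equation for an acyclic mono-ene CnH2n:
    CnH2n + (3n/2) O2 -> n CO2 + n H2O."""
    o2 = Fraction(3 * n, 2)
    if o2.denominator == 1:
        o2_str = str(o2.numerator)
    else:
        o2_str = f"{o2.numerator}/{o2.denominator}"
    return f"C{n}H{2 * n} + {o2_str} O2 -> {n} CO2 + {n} H2O"

assert monoene_combustion(2) == "C2H4 + 3 O2 -> 2 CO2 + 2 H2O"
```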
Reactions
Alkenes are relatively stable compounds, but are more reactive than alkanes. Most reactions of alkenes involve additions to this pi bond, forming new single bonds. Alkenes serve as a feedstock for the petrochemical industry because they can participate in a wide variety of reactions, prominently polymerization and alkylation. Except for ethylene, alkenes have two sites of reactivity: the carbon–carbon pi-bond and the presence of allylic CH centers. The former dominates but the allylic sites are important too.
Addition to the unsaturated bonds
Hydrogenation involves the addition of H2 resulting in an alkane. The equation of hydrogenation of ethylene to form ethane is:
H2C=CH2 + H2 → H3C−CH3
Hydrogenation reactions usually require catalysts to increase their reaction rate. The total number of hydrogens that can be added to an unsaturated hydrocarbon depends on its degree of unsaturation.
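The degree of unsaturation mentioned above follows the standard formula (2C + 2 − H)/2 for a hydrocarbon; each degree that corresponds to a pi bond can take up one H2. A minimal sketch (`degree_of_unsaturation` is an illustrative name):

```python
def degree_of_unsaturation(c, h):
    """Degree of unsaturation (rings + pi bonds) of a hydrocarbon CcHh,
    from the standard formula (2c + 2 - h) / 2."""
    dou, rem = divmod(2 * c + 2 - h, 2)
    if rem:
        raise ValueError("a neutral hydrocarbon has an even hydrogen count")
    return dou

# An acyclic mono-ene can take up exactly one H2:
assert degree_of_unsaturation(2, 4) == 1   # ethylene -> ethane
assert degree_of_unsaturation(2, 6) == 0   # ethane is already saturated
```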
Like hydrogen, halogens add to double bonds.
H2C=CH2 + Br2 → H2CBr−CH2Br
Halonium ions are intermediates. These reactions do not require catalysts.
The bromine test is used to test the saturation of hydrocarbons, and can also serve as an indication of the degree of unsaturation of unsaturated hydrocarbons. The bromine number is defined as the grams of bromine able to react with 100 g of product. As with hydrogenation, the extent of bromination also depends on the number of π bonds; a higher bromine number indicates a higher degree of unsaturation.
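Since one Br2 adds per double bond, the bromine number converts directly into moles of C=C per 100 g of sample by dividing by the molar mass of Br2 (about 159.8 g/mol). A quick sketch of that conversion (`double_bonds_per_100g` is an illustrative name):

```python
BR2_MOLAR_MASS = 159.8  # g/mol

def double_bonds_per_100g(bromine_number):
    """Moles of C=C per 100 g of sample implied by a bromine number
    (grams of Br2 consumed by 100 g of product), assuming one Br2
    adds across each double bond."""
    return bromine_number / BR2_MOLAR_MASS

# A sample consuming one mole of Br2 (159.8 g) per 100 g:
assert abs(double_bonds_per_100g(159.8) - 1.0) < 1e-9
```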
The π bonds of alkenes are also susceptible to hydration. The reaction usually involves a strong acid as catalyst, and the first step often involves formation of a carbocation. The net result of the reaction is an alcohol. The reaction equation for the hydration of ethylene is:
H2C=CH2 + H2O → CH3CH2OH
Hydrohalogenation involves the addition of H−X to unsaturated hydrocarbons. This reaction results in new C−H and C−X σ bonds. The formation of the intermediate carbocation is selective and follows Markovnikov's rule. The hydrohalogenation of an alkene yields a haloalkane. The reaction equation for HBr addition to ethylene is:
H2C=CH2 + HBr → H3C−CH2Br
Cycloaddition
Alkenes add to dienes to give cyclohexenes. This conversion is an example of a Diels-Alder reaction. Such reactions proceed with retention of stereochemistry. The rates are sensitive to electron-withdrawing or electron-donating substituents. When irradiated by UV light, alkenes dimerize to give cyclobutanes. Another example is the Schenck ene reaction, in which singlet oxygen reacts with an allylic structure to give a transposed allyl peroxide:
Oxidation
Alkenes react with percarboxylic acids and even hydrogen peroxide to yield epoxides:
For ethylene, the epoxidation is conducted on a very large scale industrially using oxygen in the presence of silver-based catalysts:
Alkenes react with ozone, leading to the scission of the double bond. The process is called ozonolysis. Often the reaction procedure includes a mild reductant, such as dimethyl sulfide, (CH3)2S:
When treated with a hot, concentrated, acidified solution of KMnO4, alkenes are cleaved to form ketones and/or carboxylic acids. The stoichiometry of the reaction is sensitive to conditions. This reaction and the ozonolysis can be used to determine the position of a double bond in an unknown alkene.
The oxidation can be stopped at the vicinal diol rather than full cleavage of the alkene by using osmium tetroxide or other oxidants:
R'CH=CR2 + 1/2 O2 + H2O → R'CH(OH)−C(OH)R2
This reaction is called dihydroxylation.
In the presence of an appropriate photosensitiser, such as methylene blue and light, alkenes can undergo reaction with reactive oxygen species generated by the photosensitiser, such as hydroxyl radicals, singlet oxygen or superoxide ion. Reactions of the excited sensitizer can involve electron or hydrogen transfer, usually with a reducing substrate (Type I reaction) or interaction with oxygen (Type II reaction). These various alternative processes and reactions can be controlled by choice of specific reaction conditions, leading to a wide range of products. A common example is the [4+2]-cycloaddition of singlet oxygen with a diene such as cyclopentadiene to yield an endoperoxide:
Polymerization
Terminal alkenes are precursors to polymers via processes termed polymerization. Some polymerizations are of great economic significance, as they generate the plastics polyethylene and polypropylene. Polymers from alkenes are usually referred to as polyolefins, although they contain no olefins. Polymerization can proceed via diverse mechanisms. Conjugated dienes such as buta-1,3-diene and isoprene (2-methylbuta-1,3-diene) also produce polymers, one example being natural rubber.
Allylic substitution
The presence of a C=C π bond in unsaturated hydrocarbons weakens the dissociation energy of the allylic C−H bonds. Thus, these groupings are susceptible to free radical substitution at these C-H sites as well as addition reactions at the C=C site. In the presence of radical initiators, allylic C-H bonds can be halogenated. The presence of two C=C bonds flanking one methylene, i.e., doubly allylic, results in particularly weak HC-H bonds. The high reactivity of these situations is the basis for certain free radical reactions, manifested in the chemistry of drying oils.
Metathesis
Alkenes undergo olefin metathesis, which cleaves and interchanges the substituents of the alkene. A related reaction is ethenolysis:
Metal complexation
In transition metal alkene complexes, alkenes serve as ligands for metals. In this case, the π electron density is donated to the metal d orbitals. The stronger the donation is, the stronger the back bonding from the metal d orbital to π* anti-bonding orbital of the alkene. This effect lowers the bond order of the alkene and increases the C-C bond length. One example is the complex . These complexes are related to the mechanisms of metal-catalyzed reactions of unsaturated hydrocarbons.
Reaction overview
Synthesis
Industrial methods
Alkenes are produced by hydrocarbon cracking. Raw materials are mostly natural gas condensate components (principally ethane and propane) in the US and Mideast and naphtha in Europe and Asia. Alkanes are broken apart at high temperatures, often in the presence of a zeolite catalyst, to produce a mixture of primarily aliphatic alkenes and lower molecular weight alkanes. The mixture is feedstock and temperature dependent, and separated by fractional distillation. This is mainly used for the manufacture of small alkenes (up to six carbons).
Related to this is catalytic dehydrogenation, where an alkane loses hydrogen at high temperatures to produce a corresponding alkene. This is the reverse of the catalytic hydrogenation of alkenes.
This process is also known as reforming. Both processes are endothermic and are driven towards the alkene at high temperatures by entropy.
Catalytic synthesis of higher α-alkenes (of the type RCH=CH2) can also be achieved by a reaction of ethylene with the organometallic compound triethylaluminium in the presence of nickel, cobalt, or platinum.
Elimination reactions
One of the principal methods for alkene synthesis in the laboratory is the elimination of alkyl halides, alcohols, and similar compounds. Most common is the β-elimination via the E2 or E1 mechanism, but α-eliminations are also known.
The E2 mechanism provides a more reliable β-elimination method than E1 for most alkene syntheses. Most E2 eliminations start with an alkyl halide or alkyl sulfonate ester (such as a tosylate or triflate). When an alkyl halide is used, the reaction is called a dehydrohalogenation. For unsymmetrical products, the more substituted alkenes (those with fewer hydrogens attached to the C=C) tend to predominate (see Zaitsev's rule). Two common methods of elimination reactions are dehydrohalogenation of alkyl halides and dehydration of alcohols. A typical example is shown below; note that if possible, the H is anti to the leaving group, even though this leads to the less stable Z-isomer.
Alkenes can be synthesized from alcohols via dehydration, in which case water is lost via the E1 mechanism. For example, the dehydration of ethanol produces ethylene:
CH3CH2OH → H2C=CH2 + H2O
An alcohol may also be converted to a better leaving group (e.g., xanthate), so as to allow a milder syn-elimination such as the Chugaev elimination and the Grieco elimination. Related reactions include eliminations by β-haloethers (the Boord olefin synthesis) and esters (ester pyrolysis).
Alkenes can be prepared indirectly from alkyl amines. The amine or ammonia is not a suitable leaving group, so the amine is first either alkylated (as in the Hofmann elimination) or oxidized to an amine oxide (the Cope reaction) to render a smooth elimination possible. The Cope reaction is a syn-elimination that occurs at or below 150 °C, for example:
The Hofmann elimination is unusual in that the less substituted (non-Zaitsev) alkene is usually the major product.
Alkenes are generated from α-halosulfones in the Ramberg–Bäcklund reaction, via a three-membered ring sulfone intermediate.
Synthesis from carbonyl compounds
Another important method for alkene synthesis involves construction of a new carbon–carbon double bond by coupling of a carbonyl compound (such as an aldehyde or ketone) to a carbanion equivalent. Such reactions are sometimes called olefinations. The most well-known of these methods is the Wittig reaction, but other related methods are known, including the Horner–Wadsworth–Emmons reaction.
The Wittig reaction involves reaction of an aldehyde or ketone with a Wittig reagent (or phosphorane) of the type Ph3P=CHR to produce an alkene and Ph3P=O. The Wittig reagent is itself prepared easily from triphenylphosphine and an alkyl halide. The reaction is quite general and many functional groups are tolerated, even esters, as in this example:
Related to the Wittig reaction is the Peterson olefination, which uses silicon-based reagents in place of the phosphorane. This reaction allows for the selection of E- or Z-products. If an E-product is desired, another alternative is the Julia olefination, which uses the carbanion generated from a phenyl sulfone. The Takai olefination based on an organochromium intermediate also delivers E-products. A titanium compound, Tebbe's reagent, is useful for the synthesis of methylene compounds; in this case, even esters and amides react.
A pair of ketones or aldehydes can be deoxygenated to generate an alkene. Symmetrical alkenes can be prepared from a single aldehyde or ketone coupling with itself, using titanium metal reduction (the McMurry reaction). If different ketones are to be coupled, a more complicated method is required, such as the Barton–Kellogg reaction.
A single ketone can also be converted to the corresponding alkene via its tosylhydrazone, using sodium methoxide (the Bamford–Stevens reaction) or an alkyllithium (the Shapiro reaction).
Synthesis from alkenes
The formation of longer alkenes via the step-wise polymerisation of smaller ones is appealing, as ethylene (the smallest alkene) is both inexpensive and readily available, with hundreds of millions of tonnes produced annually. The Ziegler–Natta process allows for the formation of very long chains, for instance those used for polyethylene. Where shorter chains are wanted, as they are for the production of surfactants, processes incorporating an olefin metathesis step, such as the Shell higher olefin process, are important.
Olefin metathesis is also used commercially for the interconversion of ethylene and 2-butene to propylene. Rhenium- and molybdenum-containing heterogeneous catalysts are used in this process:
CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3
Transition metal catalyzed hydrovinylation is another important alkene synthesis process starting from alkenes themselves. It involves the addition of a hydrogen and a vinyl group (or an alkenyl group) across a double bond.
From alkynes
Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes. If the cis-alkene is desired, hydrogenation in the presence of Lindlar's catalyst (a heterogeneous catalyst that consists of palladium deposited on calcium carbonate and treated with various forms of lead) is commonly used, though hydroboration followed by hydrolysis provides an alternative approach. Reduction of the alkyne by sodium metal in liquid ammonia gives the trans-alkene.
For the preparation of multisubstituted alkenes, carbometalation of alkynes can give rise to a large variety of alkene derivatives.
Rearrangements and related reactions
Alkenes can be synthesized from other alkenes via rearrangement reactions. Besides olefin metathesis (described above), many pericyclic reactions can be used such as the ene reaction and the Cope rearrangement.
In the Diels–Alder reaction, a cyclohexene derivative is prepared from a diene and a reactive or electron-deficient alkene.
Application
Unsaturated hydrocarbons are widely used to produce plastics, medicines, and other useful materials.
Natural occurrence
Alkenes are pervasive in nature.
Plants are the main natural source of alkenes in the form of terpenes. Many of the most vivid natural pigments are terpenes; e.g. lycopene (the red of tomatoes) and carotene (the orange of carrots). The simplest alkene, ethylene, is a plant hormone: a signaling molecule that influences the ripening of fruit.
IUPAC Nomenclature
Although the nomenclature is not widely followed, according to IUPAC, an alkene is an acyclic hydrocarbon with just one double bond between carbon atoms. Olefins comprise a larger collection of cyclic and acyclic alkenes as well as dienes and polyenes.
To form the root of the IUPAC names for straight-chain alkenes, change the -an- infix of the parent to -en-. For example, CH3-CH3 is the alkane ethANe. The name of CH2=CH2 is therefore ethENe.
For straight-chain alkenes with 4 or more carbon atoms, that name does not completely identify the compound. For those cases, and for branched acyclic alkenes, the following rules apply:
Find the longest carbon chain in the molecule. If that chain does not contain the double bond, name the compound according to the alkane naming rules. Otherwise:
Number the carbons in that chain starting from the end that is closest to the double bond.
Define the location k of the double bond as being the number of its first carbon.
Name the side groups (other than hydrogen) according to the appropriate rules.
Define the position of each side group as the number of the chain carbon it is attached to.
Write the position and name of each side group.
Write the name of the alkane with the same chain, replacing the "-ane" suffix with "k-ene".
The position of the double bond is often inserted before the name of the chain (e.g. "2-pentene"), rather than before the suffix ("pent-2-ene").
The positions need not be indicated if they are unique. Note that the double bond may imply a different chain numbering than that used for the corresponding alkane: CH3–C(CH3)2–CH2–CH2–CH3 is "2,2-dimethylpentane", whereas CH2=CH–C(CH3)2–CH2–CH3 is "3,3-dimethyl-1-pentene".
More complex rules apply for polyenes and cycloalkenes.
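For the unbranched case, the locant rules above are mechanical enough to sketch in code. The Python helper below is a hypothetical illustration (the function name and root table are invented for this sketch, not part of any library): it names a straight-chain alkene from the chain length n and the position k of the double bond.

```python
# Root names for chain lengths 2-8 (eth-, prop-, but-, ...).
ROOTS = {2: "eth", 3: "prop", 4: "but", 5: "pent", 6: "hex", 7: "hept", 8: "oct"}

def straight_chain_alkene_name(n, k):
    """Name an unbranched alkene with n carbons whose double bond starts
    at carbon k, counting from either end of the chain."""
    # Rule: number from the end nearest the double bond, i.e. take the
    # lower of the two possible locants.
    k = min(k, n - k)
    root = ROOTS[n]
    # Ethene and propene need no locant: the position is unique.
    if n <= 3:
        return root + "ene"
    return f"{root}-{k}-ene"

print(straight_chain_alkene_name(5, 2))  # pent-2-ene
print(straight_chain_alkene_name(4, 3))  # but-1-ene (renumbered from the nearer end)
```

As noted above, the locant is sometimes written before the whole name instead ("2-pentene"); this sketch uses the suffix form.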
Cis–trans isomerism
If the double bond of an acyclic mono-ene is not the first bond of the chain, the name as constructed above still does not completely identify the compound, because of cis–trans isomerism. Then one must specify whether the two single C–C bonds adjacent to the double bond are on the same side of its plane, or on opposite sides. For monoalkenes, the configuration is often indicated by the prefixes cis- (from Latin "on this side of") or trans- ("across", "on the other side of") before the name, respectively; as in cis-2-pentene or trans-2-butene.
More generally, cis–trans isomerism will exist if each of the two carbons of the double bond has two different atoms or groups attached to it. Accounting for these cases, the IUPAC recommends the more general E–Z notation, instead of the cis and trans prefixes. This notation considers the group with the highest CIP priority on each of the two carbons. If these two groups are on opposite sides of the double bond's plane, the configuration is labeled E (from the German entgegen, meaning "opposite"); if they are on the same side, it is labeled Z (from German zusammen, "together"). This labeling may be taught with the mnemonic "Z means 'on ze zame zide'".
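The final labeling step can be captured in a few lines. The sketch below is a deliberate simplification invented for illustration: it assumes each substituent's priority has already been reduced to a single number (here, the atomic number of the directly attached atom), whereas full CIP rules compare entire substituent trees.

```python
def ez_label(c1_subs, c2_subs, same_side):
    """c1_subs, c2_subs: pairs of priority numbers for the two groups on
    each double-bond carbon. same_side: True if the two higher-priority
    groups lie on the same side of the double bond's plane."""
    # No cis-trans isomerism if either carbon bears two identical groups.
    if c1_subs[0] == c1_subs[1] or c2_subs[0] == c2_subs[1]:
        return None
    # Z (zusammen, "together") = same side; E (entgegen) = opposite sides.
    return "Z" if same_side else "E"

# 2-butene: each carbon carries H (priority 1) and CH3 (priority 6, carbon).
print(ez_label((1, 6), (1, 6), same_side=True))   # Z (cis-2-butene)
print(ez_label((1, 6), (1, 6), same_side=False))  # E (trans-2-butene)
```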
Groups containing C=C double bonds
IUPAC recognizes two names for hydrocarbon groups containing carbon–carbon double bonds, the vinyl group and the allyl group.
See also
Alpha-olefin
Annulene
Aromatic hydrocarbon ("Arene")
Dendralene
Nitroalkene
Radialene
Nomenclature links
Rule A-3. Unsaturated Compounds and Univalent Radicals IUPAC Blue Book.
Rule A-4. Bivalent and Multivalent Radicals IUPAC Blue Book.
Rules A-11.3, A-11.4, A-11.5 Unsaturated monocyclic hydrocarbons and substituents IUPAC Blue Book.
Rule A-23. Hydrogenated Compounds of Fused Polycyclic Hydrocarbons IUPAC Blue Book.
References
|
https://en.wikipedia.org/wiki/Allenes
|
In organic chemistry, allenes are organic compounds in which one carbon atom has double bonds with each of its two adjacent carbon atoms (R2C=C=CR2, where R is H or some organyl group). Allenes are classified as cumulated dienes. The parent compound of this class is propadiene (H2C=C=CH2), which is itself also called allene. A group of the structure R2C=C=CR– is called allenyl, where R is H or some alkyl group. Compounds with an allene-type structure but with more than three carbon atoms are members of a larger class of compounds called cumulenes, with extended C=C=C bonding.
History
For many years, allenes were viewed as curiosities but thought to be synthetically useless and difficult to prepare and to work with. Reportedly, the first synthesis of an allene, glutinic acid, was performed in an attempt to prove the non-existence of this class of compounds. The situation began to change in the 1950s, and more than 300 papers on allenes were published in 2012 alone. These compounds are not just interesting intermediates but synthetically valuable targets themselves; for example, over 150 natural products are known with an allene or cumulene fragment.
Structure and properties
Geometry
The central carbon atom of allenes forms two sigma bonds and two pi bonds. The central carbon atom is sp-hybridized, and the two terminal carbon atoms are sp2-hybridized. The bond angle formed by the three carbon atoms is 180°, indicating linear geometry for the central carbon atom. The two terminal carbon atoms are planar, and these planes are twisted 90° from each other. The structure can also be viewed as an "extended tetrahedral" with a similar shape to methane, an analogy that is continued into the stereochemical analysis of certain derivative molecules.
Symmetry
The symmetry and isomerism of allenes have long fascinated organic chemists. For allenes with four identical substituents, there exist two twofold axes of rotation through the central carbon atom, inclined at 45° to the CH2 planes at either end of the molecule. The molecule can thus be thought of as a two-bladed propeller. A third twofold axis of rotation passes through the C=C=C bonds, and there is a mirror plane passing through both CH2 planes. Thus this class of molecules belongs to the D2d point group. Because of the symmetry, an unsubstituted allene has no net dipole moment, that is, it is a non-polar molecule.
An allene with two different substituents on each of the two carbon atoms will be chiral because there will no longer be any mirror planes. The chirality of these types of allenes was first predicted in 1875 by Jacobus Henricus van 't Hoff, but not proven experimentally until 1935. Where A has a greater priority than B according to the Cahn–Ingold–Prelog priority rules, the configuration of the axial chirality can be determined by considering the substituents on the front atom followed by the back atom when viewed along the allene axis. For the back atom, only the group of higher priority need be considered.
Chiral allenes have recently been used as building blocks in the construction of organic materials with exceptional chiroptical properties. There are a few examples of drug molecules having an allene system in their structure. Mycomycin, an antibiotic with tuberculostatic properties, is a typical example. This drug exhibits enantiomerism due to the presence of a suitably substituted allene system.
Although the semi-localized textbook σ-π separation model describes the bonding of allene using a pair of localized orthogonal π orbitals, the full molecular orbital description of the bonding is more subtle. The symmetry-correct doubly-degenerate HOMOs of allene (adapted to the D2d point group) can either be represented by a pair of orthogonal MOs or as twisted helical linear combinations of these orthogonal MOs. The symmetry of the system and the degeneracy of these orbitals imply that both descriptions are correct (in the same way that there are infinitely many ways to depict the doubly-degenerate HOMOs and LUMOs of benzene that correspond to different choices of eigenfunctions in a two-dimensional eigenspace). However, this degeneracy is lifted in substituted allenes, and the helical picture becomes the only symmetry-correct description for the HOMO and HOMO–1 of the C2-symmetric 1,3-dimethylallene. This qualitative MO description extends to higher odd-carbon cumulenes (e.g., 1,2,3,4-pentatetraene).
Chemical and spectral properties
Allenes differ considerably from other alkenes in terms of their chemical properties. Compared to isolated and conjugated dienes, they are considerably less stable: comparing the isomeric pentadienes, the allenic 1,2-pentadiene has a heat of formation of 33.6 kcal/mol, compared to 18.1 kcal/mol for (E)-1,3-pentadiene and 25.4 kcal/mol for the isolated 1,4-pentadiene.
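Sorting the quoted heats of formation makes the ordering explicit: a lower heat of formation corresponds to the more stable isomer, putting the conjugated diene first and the allene last. A minimal sketch:

```python
# Heats of formation (kcal/mol) of the isomeric pentadienes quoted above.
heats_of_formation = {
    "1,2-pentadiene (allenic)": 33.6,
    "(E)-1,3-pentadiene (conjugated)": 18.1,
    "1,4-pentadiene (isolated)": 25.4,
}

# Most stable (lowest heat of formation) first.
for name, h in sorted(heats_of_formation.items(), key=lambda kv: kv[1]):
    print(f"{h:5.1f}  {name}")
```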
The C–H bonds of allenes are considerably weaker and more acidic compared to typical vinylic C–H bonds: the bond dissociation energy is 87.7 kcal/mol (compared to 111 kcal/mol in ethylene), while the gas-phase acidity is 381 kcal/mol (compared to 409 kcal/mol for ethylene), making it slightly more acidic than the propargylic C–H bond of propyne (382 kcal/mol).
The 13C NMR spectrum of allenes is characterized by the signal of the sp-hybridized carbon atom, resonating at a characteristic 200-220 ppm. In contrast, the sp2-hybridized carbon atoms resonate around 80 ppm in a region typical for alkyne and nitrile carbon atoms, while the protons of a CH2 group of a terminal allene resonate at around 4.5 ppm — somewhat upfield of a typical vinylic proton.
Allenes possess a rich cycloaddition chemistry, including both [4+2] and [2+2] modes of addition, as well as undergoing formal cycloaddition processes catalyzed by transition metals. Allenes also serve as substrates for transition metal catalyzed hydrofunctionalization reactions.
Synthesis
Although allenes often require specialized syntheses, the parent allene, propadiene, is produced industrially on a large scale as an equilibrium mixture with methylacetylene:
H2C=C=CH2 ⇌ H3C–C≡CH
This mixture, known as MAPP gas, is commercially available. At 298 K, the ΔG° of this reaction is –1.9 kcal/mol, corresponding to Keq = 24.7.
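The quoted equilibrium constant follows directly from ΔG° = −RT ln K. A quick numerical check (using R ≈ 1.987 × 10⁻³ kcal/(mol·K)):

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol·K)
T = 298.0      # temperature in K
dG = -1.9      # ΔG° in kcal/mol for allene -> propyne

# Rearranging ΔG° = -RT ln K gives K = exp(-ΔG° / RT).
K_eq = math.exp(-dG / (R * T))
print(f"K_eq ≈ {K_eq:.1f}")  # close to the quoted 24.7
```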
The first allene to be synthesized was penta-2,3-dienedioic acid, which was prepared by Burton and Pechmann in 1887. However, the structure was only correctly identified in 1954.
Laboratory methods for the formation of allenes include:
from geminal dihalocyclopropanes and organolithium compounds (or metallic sodium or magnesium) in the Skattebøl rearrangement (Doering–LaFlamme allene synthesis) via rearrangement of cyclopropylidene carbenes/carbenoids
from reaction of certain terminal alkynes with formaldehyde, copper(I) bromide, and added base (Crabbé–Ma allene synthesis)
from propargylic halides by SN2′ displacement by an organocuprate
from dehydrohalogenation of certain dihalides
from reaction of a triphenylphosphinyl ester with an acid halide, a Wittig reaction accompanied by dehydrohalogenation
from propargylic alcohols via the Myers allene synthesis protocol—a stereospecific process
from metalation of allene or substituted allenes with BuLi and reaction with electrophiles (RX, R3SiX, D2O, etc.)
The chemistry of allenes has been reviewed in a number of books and journal articles. Some key approaches towards allenes are outlined in the following scheme:
One of the older methods is the Skattebøl rearrangement (also called the Doering–Moore–Skattebøl or Doering–LaFlamme rearrangement), in which a gem-dihalocyclopropane 3 is treated with an organolithium compound (or dissolving metal) and the presumed intermediate rearranges into an allene either directly or via carbene-like species. Notably, even strained allenes can be generated by this procedure. Modifications involving leaving groups of different nature are also known. Arguably, the most convenient modern method of allene synthesis is by sigmatropic rearrangement of propargylic substrates. Johnson–Claisen and Ireland–Claisen rearrangements of ketene acetals 4 have been used a number of times to prepare allenic esters and acids. Reactions of vinyl ethers 5 (the Saucy–Marbet rearrangement) give allene aldehydes, while propargylic sulfenates 6 give allene sulfoxides. Allenes can also be prepared by nucleophilic substitution in 9 and 10 (nucleophile Nu− can be a hydride anion), 1,2-elimination from 8, proton transfer in 7, and other, less general, methods.
Use and occurrence
The dominant application of allenes involves the parent compound itself, which, in equilibrium with propyne, is a component of MAPP gas.
Research
The reactivity of substituted allenes has been well explored.
The two π-bonds are located at a 90° angle to each other, and thus require a reagent to approach from somewhat different directions. With an appropriate substitution pattern, allenes exhibit axial chirality, as predicted by van 't Hoff as early as 1875. Protonation of allenes gives cations 11 that undergo further transformations. Reactions with soft electrophiles (e.g. Br+) deliver positively charged onium ions 13. Transition-metal-catalysed reactions proceed via allylic intermediates 15 and have attracted significant interest in recent years. Numerous cycloadditions are also known, including [4+2]-, (2+1)-, and [2+2]-variants, which deliver, e.g., 12, 14, and 16, respectively.
Occurrence
Numerous natural products contain the allene functional group. Noteworthy are the pigments fucoxanthin and peridinin. Little is known about the biosynthesis, although it is conjectured that they are often generated from alkyne precursors.
Allenes serve as ligands in organometallic chemistry. A typical complex is Pt(η2-allene)(PPh3)2. Ni(0) reagents catalyze the cyclooligomerization of allene. Using a suitable catalyst (e.g. Wilkinson's catalyst), it is possible to reduce just one of the double bonds of an allene.
Delta convention
Many rings or ring systems are known by semisystematic names that assume a maximum number of noncumulative bonds. To unambiguously specify derivatives that include cumulated bonds (and hence fewer hydrogen atoms than would be expected from the skeleton), a lowercase delta may be used with a subscript indicating the number of cumulated double bonds from that atom, e.g. 8δ2-benzocyclononene. This may be combined with the λ-convention for specifying nonstandard valency states, e.g. 2λ4δ2,5λ4δ2-thieno[3,4-c]thiophene.
See also
Compounds with three or more adjacent carbon–carbon double bonds are called cumulenes.
References
Further reading
Brummond, Kay M. (editor). Allene chemistry (special thematic issue). Beilstein Journal of Organic Chemistry 7: 394–943.
External links
Synthesis of allenes
Jacobus Henricus van 't Hoff
|
https://en.wikipedia.org/wiki/Astrobiology
|
Astrobiology is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth.
Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth.
The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline.
Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating extreme environments on Earth, such as volcanic and deep-sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications.
The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions.
Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of paleosciences, especially paleobiology, for astrobiological applications.
Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research.
Overview
The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον (astron), "star"; βίος (bios), "life"; and -λογία (-logia), "study". A close synonym is exobiology, from the Greek ἔξω, "external"; βίος, "life"; and -λογία, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to the search for life external to Earth. Another associated term is xenobiology, from the Greek ξένος, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin.
While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the existence of such life is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory.
The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive.
In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field.
The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions have included ESA's Beagle 2, which failed minutes after landing on Mars; NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water; and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars.
Theoretical foundations
Planetary habitability
Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability.
Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds.
Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry.
Environmental Stability: Where organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; resultantly, astrobiological research focuses on planets orbiting Sun-like stars and red dwarf stars. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant, as red dwarfs are extremely common. (See also: Habitability of red dwarf systems.)
Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. Previously, it was assumed that this would necessarily be from a sun-like star, however with developments within extremophile research contemporary astrobiological research often focuses on identifying environments that have the potential to support life based on the availability of an energy source, such as the presence of volcanic activity on a planet or moon that could provide a source of heat and energy.
It is important to note that these assumptions are based on our current understanding of life on Earth and the conditions under which it can exist. As our understanding of life and the potential for it to exist in different environments evolves, these assumptions may change.
Methodology
Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methodologies within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as in volcanic or deep sea environments, to understand the limits of life, and the conditions under which life might be able to survive on other planets. This includes, but is not limited to;
Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight, and some are able to survive in high temperatures and pressures, and use chemical energy instead of sunlight to produce food.
Desert extremophiles: Researchers are studying organisms that can survive in extremely dry, high-temperature conditions, such as in deserts.
Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments.
Research also regards the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including;
Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances.
Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans.
Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth.
Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth.
Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilise methodologies within planetary sciences. These include;
The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life.
The study of subsurface oceans on icy moons: Recent discoveries of subsurface oceans on moons such as Europa and Enceladus have opened up new habitable environments, and thus new targets, for the search for extraterrestrial life. Currently, missions like the Europa Clipper are being planned to search for biosignatures within these environments.
The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus.
Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmosphere of exoplanets.
SETI and CETI: Scientists search for signals from intelligent extraterrestrial civilizations using radio and optical telescopes (the search for extraterrestrial intelligence, SETI), while the related discipline of communication with extraterrestrial intelligence (CETI) focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens might raid Earth for its resources.
Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include:
The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules.
The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications where the subjects of current astrobiological research like Mars lack such a field.
The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth and led to the formation of the building blocks of life (amino acids, nucleotides, and lipids), and how these molecules could have formed spontaneously under early Earth conditions.
The study of impact events: Scientists are investigating the potential role of impact events, especially meteorite impacts, in the delivery of water and organic molecules to early Earth.
The study of the primordial soup: Researchers are investigating the conditions and ingredients present on the early Earth, such as water and organic molecules, that could have led to the formation of the first living organisms. This includes the role of water in the formation of the first cells and in catalysing chemical reactions.
The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth.
The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electricity that could have been available on the early Earth, and their role in the formation of organic molecules, thus the emergence of life.
The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how it may have played a role in the emergence of life, such as the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules.
The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence.
The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth.
The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms on the early Earth, and how these organisms may have shaped the early evolution of life.
The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms.
Research
The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at the present, the development of hypotheses firmly grounded on science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells.
Research outcomes
To date, no evidence of extraterrestrial life has been identified. Examination of the Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as a few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial.
Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to some NASA scientists.
On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone."
Elements of astrobiology
Astronomy
Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway.
The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life.
An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise:

N = R* × fp × ne × fl × fi × fc × L

where:
N = The number of communicative civilizations
R* = The rate of formation of suitable stars (stars such as the Sun)
fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun)
ne = The number of Earth-sized worlds per planetary system
fl = The fraction of those Earth-sized planets where life actually develops
fi = The fraction of life sites where intelligence develops
fc = The fraction of communicative planets (those on which electromagnetic communications technology develops)
L = The "lifetime" of communicating civilizations
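Under the definitions above, the equation is a straightforward product of its factors. A minimal sketch follows; the parameter values used here are purely illustrative placeholders, not established estimates:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L, per the definitions above."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical inputs: one suitable star forming per year, half of stars with
# planets, two Earth-sized worlds per system, and small fractions for life,
# intelligence, and communication, with a 10,000-year communicative lifetime.
N = drake(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.1, f_i=0.01, f_c=0.1, L=10_000)
print(N)  # prints 1.0 for these illustrative inputs
```

The sketch makes the equation's central weakness concrete: small changes to the unverifiable fractions swing N across many orders of magnitude.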
However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses, because it contains factors that can never be verified. The first term, R*, the rate of star formation, is generally constrained to within a few orders of magnitude. The second and third terms, fp (stars with planets) and ne (planets with habitable conditions), are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula have been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it.
Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth.
Biology
Biology cannot state that a process or phenomenon, merely by being mathematically possible, must necessarily exist on an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important for understanding the limits of life in a planetary context across four areas: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life.
Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process, which is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. A world's ability to support life was thus thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they comprise an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration, and are thus not completely independent of sunlight, the basis of their food chain is a form of bacterium that derives its energy from the oxidation of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from sunlight include green sulfur bacteria, which capture geothermal light for anoxygenic photosynthesis, and bacteria that perform chemolithoautotrophy powered by the radioactive decay of uranium. The discovery of chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist.
Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water cores of nuclear reactors, salt crystals, toxic waste, and a range of other extreme habitats previously thought to be inhospitable to life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments, and their evolutionary pathways is considered a crucial component of understanding how life might evolve elsewhere in the universe. For example, organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Xanthoria elegans, the bacteria Bacillus safensis, Deinococcus radiodurans, and Bacillus subtilis, the yeast Saccharomyces cerevisiae, seeds of Arabidopsis thaliana ('mouse-ear cress'), and the invertebrate animals known as tardigrades. While tardigrades are not considered true extremophiles, they are extremotolerant organisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and DNA-protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere.
Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist.
The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth.
The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life."
More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".
In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets.
Rare Earth hypothesis
The Rare Earth hypothesis postulates that multicellular life forms such as those found on Earth may be rarer than scientists assume. According to this hypothesis, life on Earth (and more so, multicellular life) is possible only because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.), and the chance of all those circumstances repeating elsewhere may be small. It provides a possible answer to the Fermi paradox, which asks, "If extraterrestrial aliens are common, why aren't they obvious?" It stands in opposition to the principle of mediocrity, assumed by noted astronomers Frank Drake, Carl Sagan, and others, which suggests that life on Earth is not exceptional and is more than likely to be found on innumerable other worlds.
Missions
Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System.
Viking program
The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars's surface: Viking 1 near the equator and Viking 2 further north. The results were inconclusive and are still disputed by some scientists.
Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life.
Beagle 2
Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna.
EXPOSE
EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit.
Mars Science Laboratory
The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology, its climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars.
Tanpopo
The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in low Earth orbit. Its purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganisms can survive for at least one year in space. This may support the idea that clumps of microorganisms larger than 0.5 millimetres could be one way for life to spread from planet to planet.
ExoMars rover
ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission is currently under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it is planned for a 2022 launch.
Mars 2020
Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team is proposing the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel.
Europa Clipper
Europa Clipper is a mission planned by NASA for a 2025 launch that will conduct detailed reconnaissance of Jupiter's moon Europa and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites.
Dragonfly
Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts.
Proposed concepts
Icebreaker Life
Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have had a stationary lander that would be a near copy of the successful 2008 Phoenix and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission is to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation.
Journey to Enceladus and Titan
Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter.
Enceladus Life Finder
Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon.
Life Investigation For Enceladus
Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan.
Oceanus
Oceanus is an orbiter proposed in 2017 for the fourth New Frontiers mission. It would travel to Saturn's moon Titan to assess its habitability. Oceanus's objectives are to reveal Titan's organic chemistry, geology, gravity, and topography, collect 3D reconnaissance data, catalog the organics, and determine where they may interact with liquid water.
Explorer of Enceladus and Titan
Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency.
See also
The Living Cosmos
References
Bibliography
The International Journal of Astrobiology, published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field.
Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origins of life, evolution, distribution, and destiny in the universe.
Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt.
Further reading
D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition).
Andy Weir's 2021 novel, Project Hail Mary, centers on astrobiology.
External links
Astrobiology.nasa.gov
UK Centre for Astrobiology
Spanish Centro de Astrobiología
Astrobiology Research at The Library of Congress
Astrobiology Survey – An introductory course on astrobiology
Summary - Search For Life Beyond Earth (NASA; 25 June 2021)
Extraterrestrial life
Origin of life
Astronomical sub-disciplines
Branches of biology
Speculative evolution
https://en.wikipedia.org/wiki/Aerodynamics
Aerodynamics (from Greek aero, "air", and dynamics) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
History
Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes.
In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes.
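The relationship Bernoulli described can be written, for steady, inviscid, incompressible flow along a streamline, as:

```latex
p + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant}
```

where p is the static pressure, ρ the density, v the flow speed, g the gravitational acceleration, and h the height. Lower pressure where the flow is faster is the basis of the lift-calculation method mentioned above.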
In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903.
During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers.
As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions led to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, who was one of the first to investigate the properties of supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1, where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft.
By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations.
Fundamental concepts
Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an additional property, viscosity, are used to classify flow fields.
Flow classification
Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow.
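These regimes are conventionally distinguished by Mach number, the ratio of flow speed to the local speed of sound. A minimal sketch follows; the 0.8 and 1.2 boundaries for the transonic band and the Mach 5 hypersonic cutoff are common rules of thumb, not strict definitions, and, as noted above, the hypersonic threshold in particular is disputed:

```python
def flow_regime(mach: float) -> str:
    """Classify a flow by Mach number using commonly quoted boundaries."""
    if mach < 0.8:
        return "subsonic"      # flow everywhere below the speed of sound
    elif mach < 1.2:
        return "transonic"     # mixed subsonic and supersonic regions
    elif mach < 5.0:
        return "supersonic"    # flow everywhere above the speed of sound
    return "hypersonic"        # flow much faster than sound

print(flow_regime(0.3))  # subsonic
print(flow_regime(6.0))  # hypersonic
```

The transonic band around Mach 1 exists because local flow over a body can exceed the speed of sound before the vehicle itself does.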
Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results.
Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine).
Continuum assumption
Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of many individual gas molecules colliding with one another and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow.
The validity of the continuum assumption depends on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers while the length scale of the aircraft ranges from a few meters to a few tens of meters – many orders of magnitude larger. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics.
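As an illustrative sketch, the Knudsen number Kn = λ/L makes this choice quantitative. The regime boundaries and mean-free-path figures below are commonly quoted engineering conventions, assumed here rather than taken from the text:

```python
# Knudsen number Kn = lambda / L: ratio of the gas mean free path to the
# body length scale. The regime thresholds (0.01, 0.1, 10) are common
# engineering conventions, used here as assumptions.

def knudsen_number(mean_free_path_m: float, length_scale_m: float) -> float:
    return mean_free_path_m / length_scale_m

def flow_regime(kn: float) -> str:
    if kn < 0.01:
        return "continuum"        # Navier-Stokes equations valid
    elif kn < 0.1:
        return "slip flow"        # continuum with slip boundary conditions
    elif kn < 10:
        return "transitional"
    return "free molecular"       # statistical mechanics needed

# Aircraft wing (~5 m) in the lower atmosphere, mean free path ~1 micrometer:
print(flow_regime(knudsen_number(1e-6, 5.0)))   # continuum
# Satellite (~2 m) in low Earth orbit, where the mean free path is on the
# order of a kilometer (assumed illustrative value):
print(flow_regime(knudsen_number(1e3, 2.0)))    # free molecular
```

The aircraft case gives Kn around 10⁻⁷, deep in the continuum regime; the satellite case gives Kn in the hundreds, where a molecular description is required.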
Conservation laws
The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used:
Conservation of mass Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation.
Conservation of momentum The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components).
Conservation of energy The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest.
Together, these equations are known as the Navier–Stokes equations, although some authors define the term to include only the momentum equation(s). The Navier–Stokes equations have no known general analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high-speed computers were not historically available, and because of the high computational cost of solving these complex equations now that they are, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations.
The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables.
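As a sketch of how the Bernoulli simplification mentioned above is applied: for steady, incompressible, inviscid flow along a streamline, p + ½ρv² is constant, so a speed increase implies a static-pressure drop. The numerical values below are illustrative assumptions, not from the text:

```python
RHO_AIR = 1.225  # kg/m^3, assumed sea-level air density

def bernoulli_pressure_drop(v1: float, v2: float, rho: float = RHO_AIR) -> float:
    """Static-pressure drop (Pa) when flow accelerates from v1 to v2,
    from p1 + 0.5*rho*v1**2 = p2 + 0.5*rho*v2**2 along a streamline."""
    return 0.5 * rho * (v2**2 - v1**2)

# Air accelerating from 50 m/s to 60 m/s loses static pressure:
dp = bernoulli_pressure_drop(50.0, 60.0)
print(round(dp, 1), "Pa")  # about 673.8 Pa
```

This one-line energy balance is why faster flow over a surface corresponds to lower static pressure in the incompressible, inviscid limit.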
Branches of aerodynamics
Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe.
Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic.
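The speed-regime classification above can be sketched in code. The Mach 0.8–1.2 transonic band and the Mach 5 hypersonic cutoff are the rough conventions mentioned in the text, not sharp physical boundaries:

```python
def mach_regime(mach: float) -> str:
    """Classify a flow by its characteristic Mach number.
    Boundaries follow the rough conventions in the text: transonic is
    generally taken as Mach 0.8-1.2, and Mach 5 is only a rough
    hypersonic cutoff."""
    if mach < 0.8:
        return "subsonic"
    elif mach < 1.2:
        return "transonic"   # mixed subsonic/supersonic regions occur
    elif mach < 5.0:
        return "supersonic"
    return "hypersonic"

for m in (0.3, 0.95, 2.0, 7.0):
    print(m, mach_regime(m))
```

A real classification would look at the local Mach number field, not a single characteristic value; this helper only encodes the rule-of-thumb bands.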
The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows.
Incompressible aerodynamics
An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether incompressibility can be assumed; otherwise, the effects of compressibility must be included.
Subsonic flow
Subsonic (or low-speed) aerodynamics describes fluid motion in flows whose speed is much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow, but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions.
In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (367 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the flow should be described using compressible aerodynamics.
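The quoted figures can be checked from the ideal-gas speed of sound, a = √(γRT), a standard relation; the gas constants below are assumed values for air:

```python
import math

GAMMA = 1.4      # ratio of specific heats for air (assumed)
R_AIR = 287.05   # specific gas constant for air, J/(kg K) (assumed)

def speed_of_sound(temp_kelvin: float) -> float:
    """Speed of sound in air (m/s) from the ideal-gas relation a = sqrt(gamma*R*T)."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

T = (60 - 32) * 5 / 9 + 273.15        # 60 F converted to kelvin
v_mach_03 = 0.3 * speed_of_sound(T)   # speed corresponding to Mach 0.3
print(round(v_mach_03, 1), "m/s")            # ~102 m/s
print(round(v_mach_03 / 0.3048, 1), "ft/s")  # ~335 ft/s
print(round(v_mach_03 * 2.23694, 1), "mph")  # ~229 mph
```

The result reproduces the figures in the text: Mach 0.3 at 60 °F is roughly 102 m/s (335 ft/s), i.e. around 228–229 mph.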
Compressible aerodynamics
According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows.
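A short sketch reproduces the 5% figure from the isentropic stagnation-density relation ρ₀/ρ = (1 + (γ−1)/2 · M²)^(1/(γ−1)), a standard compressible-flow result assumed here as an illustrative check:

```python
GAMMA = 1.4  # ratio of specific heats for air (assumed)

def density_change(mach: float) -> float:
    """Fractional density rise at the stagnation point relative to the
    free stream, from the isentropic relation
    rho0/rho = (1 + (gamma - 1)/2 * M**2) ** (1/(gamma - 1))."""
    return (1 + (GAMMA - 1) / 2 * mach**2) ** (1 / (GAMMA - 1)) - 1

print(round(100 * density_change(0.3), 2), "%")  # ~4.56 %, under the 5 % rule of thumb
print(round(100 * density_change(0.8), 2), "%")  # well above 5 %: compressible
```

At Mach 0.3 the maximum density change (at the stagnation point) stays below 5%, which is exactly why the incompressible approximation is conventionally allowed up to that Mach number.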
Transonic flow
The term transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some of the airflow is not supersonic.
Supersonic flow
Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem.
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there and is adjusting its movement to flow around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object, it is forced to change its properties – temperature, density, pressure, and Mach number – in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-speed flow (see Mach number), is the central difference between the supersonic and subsonic aerodynamics regimes.
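As an illustration of how abruptly properties jump across a normal shock, the static-pressure ratio from the Rankine–Hugoniot relations (a standard gas-dynamics result, not derived in the text) can be sketched:

```python
GAMMA = 1.4  # ratio of specific heats for air (assumed)

def shock_pressure_ratio(m1: float) -> float:
    """Static-pressure ratio p2/p1 across a normal shock with upstream
    Mach number m1, from the Rankine-Hugoniot normal-shock relation
    p2/p1 = 1 + 2*gamma/(gamma + 1) * (m1**2 - 1)."""
    if m1 <= 1.0:
        raise ValueError("normal shocks require supersonic upstream flow")
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (m1**2 - 1.0)

# At Mach 2, static pressure jumps by a factor of 4.5 across the shock:
print(round(shock_pressure_ratio(2.0), 4))  # 4.5
```

The jump is discontinuous and irreversible, which is precisely the "violent" property change the paragraph above describes.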
Hypersonic flow
In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas.
Associated terminology
The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence.
Boundary layers
The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically.
Turbulence
In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow.
Aerodynamics in other fields
Engineering design
Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines.
The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine.
Environmental design
Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems.
Aerodynamic equations are used in numerical weather prediction.
Ball-control in sports
Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect".
See also
Aeronautics
Aerostatics
Aviation
Insect flight – how bugs fly
List of aerospace engineering topics
List of engineering topics
Nose cone design
Fluid dynamics
Computational fluid dynamics
References
Further reading
General aerodynamics
Subsonic aerodynamics
Obert, Ed (2009). Delft; about practical aerodynamics in industry and the effects on design of aircraft.
Transonic aerodynamics
Supersonic aerodynamics
Hypersonic aerodynamics
History of aerodynamics
Aerodynamics related to engineering
Ground vehicles
Fixed-wing aircraft
Helicopters
Missiles
Model aircraft
Related branches of aerodynamics
Aerothermodynamics
Aeroelasticity
Boundary layers
Turbulence
External links
NASA Beginner's Guide to Aerodynamics
Aerodynamics for Students
Aerodynamics for Pilots
Aerodynamics and Race Car Tuning
Aerodynamic Related Projects
eFluids Bicycle Aerodynamics
Application of Aerodynamics in Formula One (F1)
Aerodynamics in Car Racing
Aerodynamics of Birds
NASA Aerodynamics Index
Dynamics
Energy in transport
https://en.wikipedia.org/wiki/Ash
Ash or ashes are the solid remnants of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, to analyse the mineral and metal content of chemical samples, ash is the non-gaseous, non-liquid residue after complete combustion.
Ashes, as the end product of incomplete combustion, are mostly mineral, but usually still contain some combustible organic or other oxidizable residues. The best-known type of ash is wood ash, produced by wood combustion in campfires, fireplaces, etc. The darker the wood ash, the higher its content of charcoal remaining from incomplete combustion. Ashes are of different types: some contain natural compounds that make soil fertile, while others contain chemical compounds that can be toxic but may break down in soil through chemical changes and microorganism activity.
Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as an alternative for handwashing when soap is not available.
Natural occurrence
Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal.
Specific types
Wood ash
Products of coal combustion
Bottom ash
Fly ash
Cigarette or cigar ash
Incinerator bottom ash, a form of ash produced in incinerators
Volcanic ash, consisting of fragmented glass, rock, and minerals ejected during an eruption
Cremation ashes
Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations.
See also
Ash (analytical chemistry)
Cinereous, consisting of ashes, ash-colored or ash-like
Potash, a term for many useful potassium salts that traditionally derived from plant ashes, but today are typically mined from underground deposits
Coal, a carbon-rich rock into which plant ash and other organic remains may carbonise over geological time
Carbon, a basic component of ashes
Charcoal, the carbon residue left after heating wood, mainly used as a traditional fuel
References
Combustion
https://en.wikipedia.org/wiki/Antiderivative
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F' = f. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.
In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference.
Examples
The function F(x) = x^3/3 is an antiderivative of f(x) = x^2, since the derivative of x^3/3 is x^2. And since the derivative of a constant is zero, x^2 will have an infinite number of antiderivatives, such as x^3/3, x^3/3 + 1, x^3/3 - 2, etc. Thus, all the antiderivatives of x^2 can be obtained by changing the value of c in F(x) = x^3/3 + c, where c is an arbitrary constant known as the constant of integration. Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value of c.
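A minimal numerical sketch of this idea (an illustration, not part of the article): differentiating F(x) = x^3/3 + c by a finite-difference approximation recovers f(x) = x^2 for any choice of the constant c:

```python
def F(x, c=0.0):
    return x**3 / 3 + c   # candidate antiderivative with arbitrary constant c

def f(x):
    return x**2           # the original function

def derivative(g, x, h=1e-6):
    """Central finite-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# F'(x) matches f(x) regardless of the constant of integration:
for c in (0.0, 1.0, -2.0):
    for x in (0.5, 1.0, 2.0):
        assert abs(derivative(lambda t: F(t, c), x) - f(x)) < 1e-6
print("antiderivatives differ only by a constant shift")
```

The constant c vanishes under differentiation, which is why every vertical translation of one antiderivative is again an antiderivative.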
More generally, the power function f(x) = x^n has antiderivative F(x) = x^(n+1)/(n+1) + c if n ≠ -1, and F(x) = ln|x| + c if n = -1.
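The power rule, including its n = -1 special case, can be sketched as a small helper (illustrative; the constant of integration is omitted):

```python
import math

def power_antiderivative(n: float):
    """Return an antiderivative of x**n (constant of integration omitted)."""
    if n == -1:
        return lambda x: math.log(abs(x))    # special case: ln|x|
    return lambda x: x**(n + 1) / (n + 1)

F = power_antiderivative(2)    # x**3 / 3
G = power_antiderivative(-1)   # ln|x|
print(F(3.0))               # 9.0
print(round(G(math.e), 9))  # 1.0
```

The n = -1 branch exists because x^(n+1)/(n+1) is undefined there; the logarithm fills that gap.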
In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations of acceleration, velocity and displacement:
∫ a dt = v + C_1, and ∫ v dt = s + C_2.
Uses and properties
Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the integrable function f over the interval [a, b], then:
∫_a^b f(x) dx = F(b) - F(a).
Because of this, each of the infinitely many antiderivatives of a given function f may be called the "indefinite integral" of f and written using the integral symbol with no bounds: ∫ f(x) dx.
If F is an antiderivative of f, and the function f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number c such that G(x) = F(x) + c for all x. Here c is called the constant of integration. If the domain of F is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance
F(x) = -1/x + c_1 for x < 0, and F(x) = -1/x + c_2 for x > 0
is the most general antiderivative of f(x) = 1/x^2 on its natural domain (-∞, 0) ∪ (0, ∞).
Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper boundary:
F(x) = ∫_a^x f(t) dt
for any x in the domain of f. Varying the lower boundary a produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the fundamental theorem of calculus.
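A sketch checking the fundamental theorem numerically (a midpoint Riemann sum is used purely as an illustration): the numerically computed integral of f(x) = x^2 over [0, 1] matches F(1) - F(0) = 1/3:

```python
def riemann_sum(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the definite integral of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x**2
F = lambda x: x**3 / 3             # an antiderivative of f

lhs = riemann_sum(f, 0.0, 1.0)     # definite integral, computed numerically
rhs = F(1.0) - F(0.0)              # fundamental theorem of calculus
assert abs(lhs - rhs) < 1e-9
print(round(lhs, 6), round(rhs, 6))  # 0.333333 0.333333
```

Evaluating an antiderivative at two endpoints replaces an entire limit-of-sums computation, which is the practical content of the theorem.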
There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions (like polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations). Examples of these are
the error function
the Fresnel function
the sine integral
the logarithmic integral function and
sophomore's dream
For a more detailed discussion, see also Differential Galois theory.
Techniques of integration
Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral.
There exist many properties and techniques for finding antiderivatives. These include, among others:
The linearity of integration (which breaks complicated integrals into simpler ones)
Integration by substitution, often combined with trigonometric identities or the natural logarithm
The inverse chain rule method (a special case of integration by substitution)
Integration by parts (to integrate products of functions)
Inverse function integration (a formula that expresses the antiderivative of the inverse f⁻¹ of an invertible and continuous function f in terms of the antiderivative of f and of f⁻¹)
The method of partial fractions in integration (which allows us to integrate all rational functions—fractions of two polynomials)
The Risch algorithm
Additional techniques for multiple integrations (see for instance double integrals, polar coordinates, the Jacobian and the Stokes' theorem)
Numerical integration (a technique for approximating a definite integral when no elementary antiderivative exists, as in the case of exp(-x^2))
Algebraic manipulation of integrand (so that other integration techniques, such as integration by substitution, may be used)
Cauchy formula for repeated integration (to calculate the n-times antiderivative of a function)
Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals.
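As a sketch of the numerical-integration item in the list above, composite Simpson's rule applied to exp(-x^2), whose antiderivative is not elementary; the exact value via the error function is used only to check the approximation:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n subintervals (n made even)."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integral of exp(-x^2) from 0 to 1 has no elementary antiderivative;
# its exact value is (sqrt(pi)/2) * erf(1).
approx = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
assert abs(approx - exact) < 1e-10
print(round(approx, 8))  # 0.74682413
```

Even though no closed-form antiderivative exists, the definite integral itself is perfectly computable to high accuracy.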
Of non-continuous functions
Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that:
Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives.
In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable.
Assuming that the domains of the functions are open intervals:
A necessary, but not sufficient, condition for a function f to have an antiderivative is that f have the intermediate value property. That is, if [a, b] is a subinterval of the domain of f and y is any real number between f(a) and f(b), then there exists a c between a and b such that f(c) = y. This is a consequence of Darboux's theorem.
The set of discontinuities of f must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function f having an antiderivative, which has the given set as its set of discontinuities.
If f has an antiderivative, is bounded on closed finite subintervals of the domain, and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative.
If f has an antiderivative F on a closed interval [a, b], then for any choice of partition, if one chooses sample points as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value F(b) - F(a). However, if f is unbounded, or if f is bounded but the set of discontinuities of f has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below.
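This telescoping can be seen concretely with an illustrative sketch. For F(x) = x^2 and f(x) = 2x, the mean value theorem picks the midpoint of each subinterval (since F(v) - F(u) = f(c)(v - u) forces c = (u + v)/2), so the Riemann sum equals F(b) - F(a) exactly for every partition:

```python
# Illustrative check: for F(x) = x**2, f(x) = F'(x) = 2x, and the MVT
# sample point on [u, v] is the midpoint, making the Riemann sum
# telescope exactly to F(b) - F(a) regardless of the partition.

def mvt_riemann_sum(a, b, n):
    f = lambda x: 2 * x
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        u, v = a + i * dx, a + (i + 1) * dx
        c = (u + v) / 2          # MVT sample point: F(v) - F(u) = f(c)(v - u)
        total += f(c) * (v - u)
    return total

exact = 3.0**2 - 1.0**2          # F(3) - F(1) = 8
for n in (1, 2, 10, 1000):       # every partition size gives the exact answer
    assert abs(mvt_riemann_sum(1.0, 3.0, n) - exact) < 1e-9
print("the Riemann sum telescopes to F(b) - F(a) for every partition")
```

With other sample points (say, left endpoints) the sum only approaches 8 as the partition is refined; the MVT choice makes it exact at every resolution.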
Some examples
Basic formulae
If , then .
See also
Antiderivative (complex analysis)
Formal antiderivative
Jackson integral
Lists of integrals
Symbolic integration
Area
Notes
References
Further reading
Introduction to Classical Real Analysis, by Karl R. Stromberg; Wadsworth, 1981 (see also)
Historical Essay On Continuity Of Derivatives by Dave L. Renfro
External links
Wolfram Integrator — Free online symbolic integration with Mathematica
Function Calculator from WIMS
Integral at HyperPhysics
Antiderivatives and indefinite integrals at the Khan Academy
Integral calculator at Symbolab
The Antiderivative at MIT
Introduction to Integrals at SparkNotes
Antiderivatives at Harvey Mudd College
Integral calculus
Linear operators in calculus
https://en.wikipedia.org/wiki/AI-complete
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.
Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and for computer security to circumvent brute-force attacks.
History
The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File.
AI-complete problems
AI-complete problems are hypothesized to include:
AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system)
Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word-sense disambiguation)
Autonomous driving
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.
Machine translation
To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability to reason. It must have extensive world knowledge so that it knows what is being discussed — it must at least be familiar with all the same commonsense facts that the average human translator knows. Some of this knowledge is in the form of facts that can be explicitly represented, but some knowledge is unconscious and closely tied to the human body: for example, the machine may need to understand how an ocean makes one feel to accurately translate a specific metaphor in the text. It must also model the authors' goals, intentions, and emotional states to accurately reproduce them in a new language. In short, the machine is required to have a wide variety of human intellectual skills, including reason, commonsense knowledge and the intuitions that underlie motion and manipulation, perception, and social intelligence. Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.
Software brittleness
Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real-world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.
DeepMind published a work in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens."
Formalization
Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterised formally. Since many AI problems have no formalisation yet, conventional complexity theory does not allow the definition of AI-completeness.
To address this problem, a complexity theory for AI has been proposed. It is based on a model of computation that splits the computational burden between a computer and a human: one part is solved by the computer and the other part by the human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity and reducibility, which in turn allows equivalence classes to be defined.
The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair ⟨Φ_H, Φ_M⟩, where the first element represents the complexity of the human's part and the second element is the complexity of the machine's part.
Results
The complexity of solving the following problems with a human-assisted Turing machine is:
Optical character recognition for printed text:
Turing test:
for an n-sentence conversation where the oracle remembers the conversation history (persistent oracle):
for an n-sentence conversation where the conversation history must be retransmitted:
for an n-sentence conversation where the conversation history must be retransmitted and the person takes linear time to read the query:
ESP game:
Image labelling (based on the Arthur–Merlin protocol):
Image classification: human only: , and with less reliance on the human: .
See also
ASR-complete
List of unsolved problems in computer science
Synthetic intelligence
Practopoiesis
References
Artificial intelligence
Computational problems
https://en.wikipedia.org/wiki/Ammeter
An ammeter (abbreviation of Ampere meter) is an instrument used to measure the current in a circuit. Electric currents are measured in amperes (A), hence the name. For direct measurement, the ammeter is connected in series with the circuit in which the current is to be measured. An ammeter usually has low resistance so that it does not cause a significant voltage drop in the circuit being measured.
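A small sketch shows why low ammeter resistance matters: the meter's own series resistance reduces the very current it is trying to measure. The component values below are illustrative assumptions:

```python
# Inserting a real ammeter (internal resistance r_meter) in series with a
# circuit changes the measured current: I = V / (R + r_meter).
# All values here are illustrative, not from the text.

def measured_current(v_source: float, r_circuit: float, r_meter: float) -> float:
    return v_source / (r_circuit + r_meter)

true_i = measured_current(5.0, 100.0, 0.0)   # ideal (zero-resistance) meter: 50 mA
read_i = measured_current(5.0, 100.0, 1.0)   # meter with 1 ohm of burden resistance
error_pct = 100 * (true_i - read_i) / true_i
print(round(1000 * read_i, 2), "mA")   # 49.5 mA
print(round(error_pct, 2), "% low")    # ~0.99 % low
```

With 1 Ω of meter resistance in a 100 Ω circuit, the reading is about 1% below the undisturbed current; keeping the meter resistance small keeps this burden error negligible.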
Instruments used to measure smaller currents, in the milliampere or microampere range, are designated as milliammeters or microammeters. Early ammeters were laboratory instruments that relied on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems. An ammeter is generally represented by the letter 'A' in a circuit diagram.
History
The relation between electric current, magnetic fields and physical forces was first noted by Hans Christian Ørsted in 1820, who observed a compass needle was deflected from pointing North when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, where the restoring force returning the pointer to the zero position was provided by the Earth's magnetic field. This made these instruments usable only when aligned with the Earth's field. Sensitivity of the instrument was increased by using additional turns of wire to multiply the effect – the instruments were called "multipliers".
The word rheoscope as a detector of electrical currents was coined by Sir Charles Wheatstone about 1840 but is no longer used to describe electrical instruments. The word's makeup is similar to that of rheostat (also coined by Wheatstone), a device used to adjust the current in a circuit. Rheostat is a historical term for a variable resistance, though unlike rheoscope it may still be encountered.
Types
Some instruments are panel meters, meant to be mounted on some sort of control panel. Of these, the flat, horizontal or vertical type is often called an edgewise meter.
Moving-coil
The D'Arsonval galvanometer is a moving coil ammeter. It uses magnetic deflection, where current passing through a coil placed in the magnetic field of a permanent magnet causes the coil to move. The modern form of this instrument was developed by Edward Weston, and uses two spiral springs to provide the restoring force. The uniform air gap between the iron core and the permanent magnet poles makes the deflection of the meter linearly proportional to current. These meters have linear scales. Basic meter movements can have full-scale deflection for currents from about 25 microamperes to 10 milliamperes.
Because the magnetic field is polarised, the meter needle acts in opposite directions for each direction of current. A DC ammeter is thus sensitive to which polarity it is connected in; most are marked with a positive terminal, but some have centre-zero mechanisms
and can display currents in either direction. A moving coil meter indicates the average (mean) of a varying current through it,
which is zero for AC. For this reason, moving-coil meters are only usable directly for DC, not AC.
This type of meter movement is extremely common for both ammeters and other meters derived from them, such as voltmeters and ohmmeters.
Moving magnet
Moving magnet ammeters operate on essentially the same principle as moving coil, except that the coil is mounted in the meter case and a permanent magnet moves the needle. Moving magnet ammeters can carry larger currents than moving coil instruments, often several tens of amperes, because the coil can be made of thicker wire and the current does not have to be carried by the hairsprings. Indeed, some ammeters of this type do not have hairsprings at all, instead using a fixed permanent magnet to provide the restoring force.
Electrodynamic
An electrodynamic ammeter uses an electromagnet instead of the permanent magnet of the d'Arsonval movement. This instrument can respond to both alternating and direct current and also indicates true RMS for AC. See Wattmeter for an alternative use for this instrument.
Moving-iron
Moving iron ammeters use a piece of iron which moves when acted upon by the electromagnetic force of a fixed coil of wire. The moving-iron meter was invented by Austrian engineer Friedrich Drexler in 1884.
This type of meter responds to both direct and alternating currents (as opposed to the moving-coil ammeter, which works on direct current only). The iron element consists of a moving vane attached to a pointer, and a fixed vane, surrounded by a coil. As alternating or direct current flows through the coil and induces a magnetic field in both vanes, the vanes repel each other and the moving vane deflects against the restoring force provided by fine helical springs. The deflection of a moving iron meter is proportional to the square of the current. Consequently, such meters would normally have a nonlinear scale, but the iron parts are usually modified in shape to make the scale fairly linear over most of its range. Moving iron instruments indicate the RMS value of any AC waveform applied. Moving iron ammeters are commonly used to measure current in industrial frequency AC circuits.
Hot-wire
In a hot-wire ammeter, a current passes through a wire which expands as it heats. Although these instruments have slow response time and low accuracy, they were sometimes used in measuring radio-frequency current.
These also measure true RMS for an applied AC.
Digital
In much the same way as the analogue ammeter formed the basis for a wide variety of derived meters, including voltmeters, the basic mechanism for a digital meter is a digital voltmeter mechanism, and other types of meter are built around this.
Digital ammeter designs use a shunt resistor to produce a calibrated voltage proportional to the current flowing. This voltage is then measured by a digital voltmeter, through use of an analog-to-digital converter (ADC); the digital display is calibrated to display the current through the shunt. Such instruments are often calibrated to indicate the RMS value for a sine wave only, but many designs will indicate true RMS within limitations of the wave crest factor.
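The shunt-plus-ADC path described above can be sketched in a few lines. The 10-milliohm shunt value and the sample waveform below are illustrative assumptions, not figures from the text; the sketch shows how sampled shunt voltages are converted to a true-RMS current reading:

```python
import math

# Hypothetical sketch of the true-RMS conversion a digital ammeter performs
# on shunt-voltage samples.  The shunt value is an assumption for
# illustration only.
SHUNT_OHMS = 0.010

def true_rms_current(samples_v):
    """Root-mean-square of the sampled shunt voltages, scaled to amperes."""
    rms_v = math.sqrt(sum(v * v for v in samples_v) / len(samples_v))
    return rms_v / SHUNT_OHMS

# One full cycle of a 1 A-peak sine wave seen across the shunt:
samples = [0.010 * math.sin(2 * math.pi * k / 1000) for k in range(1000)]
print(round(true_rms_current(samples), 3))  # ~0.707 A, i.e. 1/sqrt(2)
```

For a pure sine wave this agrees with an average-responding, sine-calibrated meter; for waveforms with a high crest factor the two readings diverge, which is the limitation noted above.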
Integrating
There is also a range of devices referred to as integrating ammeters.
In these ammeters the current is summed over time, giving as a result the product of current and time; which is proportional to the electrical charge transferred with that current. These can be used for metering energy (the charge needs to be multiplied by the voltage to give energy) or for estimating the charge of a battery or capacitor.
Picoammeter
A picoammeter, or pico ammeter, measures very low electric current, usually from the picoampere range at the lower end to the milliampere range at the upper end. Picoammeters are used where the current being measured is below the limits of sensitivity of other devices, such as multimeters.
Most picoammeters use a "virtual short" technique and have several different measurement ranges that must be switched between to cover multiple decades of measurement. Other modern picoammeters use log compression and a "current sink" method that eliminates range switching and associated voltage spikes.
Special design and usage precautions, such as special insulators and driven shields, must be observed in order to reduce leakage currents which may otherwise swamp measurements. Triaxial cable is often used for probe connections.
Application
Ammeters must be connected in series with the circuit to be measured. For relatively small currents (up to a few amperes), an ammeter may pass the whole of the circuit current. For larger direct currents, a shunt resistor carries most of the circuit current and a small, accurately-known fraction of the current passes through the meter movement. For alternating current circuits, a current transformer may be used to provide a convenient small current to drive an instrument, such as 1 or 5 amperes, while the primary current to be measured is much larger (up to thousands of amperes). The use of a shunt or current transformer also allows convenient location of the indicating meter without the need to run heavy circuit conductors up to the point of observation. In the case of alternating current, the use of a current transformer also isolates the meter from the high voltage of the primary circuit. A shunt provides no such isolation for a direct-current ammeter, but where high voltages are used it may be possible to place the ammeter in the "return" side of the circuit which may be at low potential with respect to earth.
Ammeters must not be connected directly across a voltage source since their internal resistance is very low and excess current would flow. Ammeters are designed for a low voltage drop across their terminals, much less than one volt; the extra circuit losses produced by the ammeter are called its "burden" on the measured circuit.
Ordinary Weston-type meter movements can measure only milliamperes at most, because the springs and practical coils can carry only limited currents. To measure larger currents, a resistor called a shunt is placed in parallel with the meter. Shunt resistances range from a few milliohms down to fractions of a milliohm. Nearly all of the current flows through the shunt, and only a small fraction flows through the meter. This allows the meter to measure large currents. Traditionally, the meter used with a shunt has a full-scale deflection (FSD) of , so shunts are typically designed to produce a voltage drop of when carrying their full rated current.
To make a multi-range ammeter, a selector switch can be used to connect one of a number of shunts across the meter. It must be a make-before-break switch to avoid damaging current surges through the meter movement when switching ranges.
A better arrangement is the Ayrton shunt or universal shunt, invented by William E. Ayrton, which does not require a make-before-break switch. It also avoids any inaccuracy because of contact resistance. In the figure, assuming for example, a movement with a full-scale voltage of 50 mV and desired current ranges of 10 mA, 100 mA, and 1 A, the resistance values would be: R1 = 4.5 ohms, R2 = 0.45 ohm, R3 = 0.05 ohm. And if the movement resistance is 1000 ohms, for example, R1 must be adjusted to 4.525 ohms.
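The resistance values quoted above can be reproduced with a short calculation. The sketch below uses only the example figures from the text (50 mV full-scale movement, 1000-ohm coil, ranges of 10 mA, 100 mA and 1 A): the nominal section values follow from dropping 50 mV at each range current, and R1 is then adjusted for the 50 µA the movement itself draws:

```python
# Sketch reproducing the Ayrton-shunt example in the text.
V_FS = 0.050              # movement full-scale voltage (V)
R_M = 1000.0              # movement resistance (ohms)
I_M = V_FS / R_M          # movement full-scale current: 50 microamperes
ranges = [0.010, 0.100, 1.0]  # desired full-scale currents (A)

# Nominal design ignores the movement current: the chain below each tap
# simply drops V_FS at the range current.
taps = [V_FS / i for i in ranges]          # 5.0, 0.5, 0.05 ohms
r1 = taps[0] - taps[1]                     # 4.5 ohms
r2 = taps[1] - taps[2]                     # 0.45 ohm
r3 = taps[2]                               # 0.05 ohm

# The exact total for the 10 mA range accounts for the 50 uA taken by the
# movement:  I_M * R_M = (I - I_M) * S   =>   S = I_M * R_M / (I - I_M)
s_exact = I_M * R_M / (ranges[0] - I_M)    # ~5.0251 ohms
r1_adjusted = s_exact - (r2 + r3)          # ~4.525 ohms, as in the text
print(round(r1_adjusted, 3))
```

Keeping R2 and R3 at their nominal values and trimming only R1 is what yields the 4.525-ohm figure given above.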
Switched shunts are rarely used for currents above 10 amperes.
Zero-center ammeters are used for applications requiring current to be measured with both polarities, common in scientific and industrial equipment. Zero-center ammeters are also commonly placed in series with a battery. In this application, the charging of the battery deflects the needle to one side of the scale (commonly, the right side) and the discharging of the battery deflects the needle to the other side. A special type of zero-center ammeter for testing high currents in cars and trucks has a pivoted bar magnet that moves the pointer, and a fixed bar magnet to keep the pointer centered with no current. The magnetic field around the wire carrying current to be measured deflects the moving magnet.
Since the ammeter shunt has a very low resistance, mistakenly wiring the ammeter in parallel with a voltage source will cause a short circuit, at best blowing a fuse, possibly damaging the instrument and wiring, and exposing an observer to injury.
In AC circuits, a current transformer can be used to convert the large current in the main circuit into a smaller current more suited to a meter. Some designs of transformer are able to directly convert the magnetic field around a conductor into a small AC current, typically either or at full rated current, that can be easily read by a meter. In a similar way, accurate AC/DC non-contact ammeters have been constructed using Hall effect magnetic field sensors. A portable hand-held clamp-on ammeter is a common tool for maintenance of industrial and commercial electrical equipment, which is temporarily clipped over a wire to measure current. Some recent types have a parallel pair of magnetically soft probes that are placed on either side of the conductor.
See also
Clamp meter
Class of accuracy in electrical measurements
Electric circuit
Electrical measurements
Electrical current#Measurement
Electronics
List of electronics topics
Measurement category
Multimeter
Ohmmeter
Rheoscope
Voltmeter
Notes
References
External links
— from Lessons in Electric Circuits series main page
Electrical meters
Electronic test equipment
Flow meters
|
https://en.wikipedia.org/wiki/Amoxicillin
|
Amoxicillin is an antibiotic medication belonging to the aminopenicillin class of the penicillin family. The drug is used to treat bacterial infections such as middle ear infection, strep throat, pneumonia, skin infections, odontogenic infections, and urinary tract infections. It is taken by mouth, or less commonly by injection.
Common adverse effects include nausea and rash. It may also increase the risk of yeast infections and, when used in combination with clavulanic acid, diarrhea. It should not be used in those who are allergic to penicillin. While usable in those with kidney problems, the dose may need to be decreased. Its use in pregnancy and breastfeeding does not appear to be harmful. Amoxicillin is in the β-lactam family of antibiotics.
Amoxicillin was discovered in 1958 and came into medical use in 1972. Amoxil was approved for medical use in the United States in 1974, and in the United Kingdom in 1977. It is on the World Health Organization's (WHO) List of Essential Medicines. It is one of the most commonly prescribed antibiotics in children. Amoxicillin is available as a generic medication. In 2020, it was the 40th most commonly prescribed medication in the United States, with more than 15 million prescriptions.
Medical uses
Amoxicillin is used in the treatment of a number of infections, including acute otitis media, streptococcal pharyngitis, pneumonia, skin infections, urinary tract infections, Salmonella infections, Lyme disease, and chlamydia infections.
Acute otitis media
Children with acute otitis media who are younger than six months of age are generally treated with amoxicillin or other antibiotics. Although most children with acute otitis media who are older than two years old do not benefit from treatment with amoxicillin or other antibiotics, such treatment may be helpful in children younger than two years old with acute otitis media that is bilateral or accompanied by ear drainage. In the past, amoxicillin was dosed three times daily when used to treat acute otitis media, which resulted in missed doses in routine ambulatory practice. There is now evidence that two times daily dosing or once daily dosing has similar effectiveness.
Respiratory infections
Amoxicillin and amoxicillin-clavulanate have been recommended by guidelines as the drug of choice for bacterial sinusitis and other respiratory infections. Most sinusitis infections are caused by viruses, for which amoxicillin and amoxicillin-clavulanate are ineffective, and the small benefit gained by amoxicillin may be overridden by the adverse effects.
Amoxicillin is recommended as the preferred first-line treatment for community-acquired pneumonia in adults by the National Institute for Health and Care Excellence, either alone (mild to moderate severity disease) or in combination with a macrolide. The World Health Organization (WHO) recommends amoxicillin as first-line treatment for pneumonia that is not "severe". Amoxicillin is used in post-exposure inhalation of anthrax to prevent disease progression and for prophylaxis.
H. pylori
It is effective as one part of a multi-drug regimen for treatment of stomach infections of Helicobacter pylori. It is typically combined with a proton-pump inhibitor (such as omeprazole) and a macrolide antibiotic (such as clarithromycin); other drug combinations are also effective.
Lyme borreliosis
Amoxicillin is effective for treatment of early cutaneous Lyme borreliosis; the effectiveness and safety of oral amoxicillin is neither better nor worse than common alternatively-used antibiotics.
Odontogenic infections
Amoxicillin is used to treat odontogenic infections, infections of the tongue, lips, and other oral tissues. It may be prescribed following a tooth extraction, particularly in those with compromised immune systems.
Skin infections
Amoxicillin is occasionally used for the treatment of skin infections, such as acne vulgaris. It is often an effective treatment for cases of acne vulgaris that have responded poorly to other antibiotics, such as doxycycline and minocycline.
Infections in infants in resource-limited settings
Amoxicillin is recommended by the World Health Organization for the treatment of infants with signs and symptoms of pneumonia in resource-limited situations when the parents are unable or unwilling to accept hospitalization of the child. Amoxicillin in combination with gentamicin is recommended for the treatment of infants with signs of other severe infections when hospitalization is not an option.
Prevention of bacterial endocarditis
It is also used to prevent bacterial endocarditis in high-risk people having dental work done, to prevent Streptococcus pneumoniae and other encapsulated bacterial infections in those without spleens, such as people with sickle-cell disease, and for both the prevention and the treatment of anthrax. The United Kingdom recommends against its use for infectious endocarditis prophylaxis. These recommendations do not appear to have changed the rates of infection for infectious endocarditis.
Combination treatment
Amoxicillin is susceptible to degradation by β-lactamase-producing bacteria, which are resistant to most β-lactam antibiotics, such as penicillin. For this reason, it may be combined with clavulanic acid, a β-lactamase inhibitor. This drug combination is commonly called co-amoxiclav.
Spectrum of activity
It is a moderate-spectrum, bacteriolytic, β-lactam antibiotic in the aminopenicillin family used to treat susceptible Gram-positive and Gram-negative bacteria. It is usually the drug of choice within the class because it is better-absorbed, following oral administration, than other β-lactam antibiotics.
In general, Streptococcus, Bacillus subtilis, Enterococcus, Haemophilus, Helicobacter, and Moraxella are susceptible to amoxicillin, whereas Citrobacter, Klebsiella and Pseudomonas aeruginosa are resistant to it. Some E. coli and most clinical strains of Staphylococcus aureus have developed resistance to amoxicillin to varying degrees.
Adverse effects
Adverse effects are similar to those for other β-lactam antibiotics, including nausea, vomiting, rashes, and antibiotic-associated colitis. Loose bowel movements (diarrhea) may also occur. Rarer adverse effects include mental changes, lightheadedness, insomnia, confusion, anxiety, sensitivity to lights and sounds, and unclear thinking. Immediate medical care is required upon the first signs of these adverse effects.
The onset of an allergic reaction to amoxicillin can be very sudden and intense; emergency medical attention must be sought as quickly as possible. The initial phase of such a reaction often starts with a change in mental state, skin rash with intense itching (often beginning in fingertips and around groin area and rapidly spreading), and sensations of fever, nausea, and vomiting. Any other symptoms that seem even remotely suspicious must be taken very seriously. However, more mild allergy symptoms, such as a rash, can occur at any time during treatment, even up to a week after treatment has ceased. For some people allergic to amoxicillin, the adverse effects can be fatal due to anaphylaxis.
Use of the amoxicillin/clavulanic acid combination for more than one week has caused a drug-induced immunoallergic-type hepatitis in some patients. Young children having ingested acute overdoses of amoxicillin manifested lethargy, vomiting, and renal dysfunction.
There is poor reporting of adverse effects of amoxicillin from clinical trials. For this reason, the severity and frequency of adverse effects from amoxicillin is probably higher than reported from clinical trials.
Nonallergic rash
Between 3 and 10% of children taking amoxicillin (or ampicillin) show a late-developing (>72 hours after beginning medication and having never taken penicillin-like medication previously) rash, which is sometimes referred to as the "amoxicillin rash". The rash can also occur in adults and may rarely be a component of the DRESS syndrome.
The rash is described as maculopapular or morbilliform (measles-like; therefore, in medical literature, it is called "amoxicillin-induced morbilliform rash".). It starts on the trunk and can spread from there. This rash is unlikely to be a true allergic reaction and is not a contraindication for future amoxicillin usage, nor should the current regimen necessarily be stopped. However, this common amoxicillin rash and a dangerous allergic reaction cannot easily be distinguished by inexperienced persons, so a healthcare professional is often required to distinguish between the two.
A nonallergic amoxicillin rash may also be an indicator of infectious mononucleosis. Some studies indicate about 80–90% of patients with acute Epstein–Barr virus infection treated with amoxicillin or ampicillin develop such a rash.
Interactions
Amoxicillin may interact with these drugs:
Anticoagulants (dabigatran, warfarin).
Methotrexate (chemotherapy and immunosuppressant).
Typhoid, Cholera and BCG vaccines.
Probenecid reduces renal excretion and increases blood levels of amoxicillin.
Oral contraceptives potentially become less effective.
Allopurinol (gout treatment).
Mycophenolate (immunosuppressant)
Pharmacology
Amoxicillin (α-amino-p-hydroxybenzyl penicillin) is a semisynthetic derivative of penicillin with a structure similar to ampicillin but with better absorption when taken by mouth, thus yielding higher concentrations in blood and in urine. Amoxicillin diffuses easily into tissues and body fluids. It will cross the placenta and is excreted into breastmilk in small quantities. It is metabolized by the liver and excreted into the urine. It has an onset of 30 minutes and a half-life of 3.7 hours in newborns and 1.4 hours in adults.
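The half-lives quoted above imply first-order elimination, which a short calculation can illustrate. This is a sketch for illustration only, not dosing guidance; it assumes simple exponential decay with the 1.4-hour adult half-life given in the text:

```python
# Sketch (illustration only, not dosing guidance): fraction of an absorbed
# dose still circulating after t hours, assuming first-order elimination
# with the 1.4-hour adult half-life from the text.
HALF_LIFE_H = 1.4

def fraction_remaining(t_hours):
    return 0.5 ** (t_hours / HALF_LIFE_H)

print(round(fraction_remaining(7.0), 3))  # five half-lives -> 0.031
```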
Amoxicillin attaches to the cell wall of susceptible bacteria and results in their death. It also is a bactericidal compound. It is effective against streptococci, pneumococci, enterococci, Haemophilus influenzae, Escherichia coli, Proteus mirabilis, Neisseria meningitidis, Neisseria gonorrhoeae, Shigella, Chlamydia trachomatis, Salmonella, Borrelia burgdorferi, and Helicobacter pylori. As a derivative of ampicillin, amoxicillin is a member of the penicillin family and, like penicillins, is a β-lactam antibiotic. It inhibits cross-linkage between the linear peptidoglycan polymer chains that make up a major component of the bacterial cell wall.
It has two ionizable groups in the physiological range (the amino group in alpha-position to the amide carbonyl group and the carboxyl group).
History
Amoxicillin was one of several semisynthetic derivatives of 6-aminopenicillanic acid (6-APA) developed by the Beecham Group in the 1960s. It was invented by Anthony Alfred Walter Long and John Herbert Charles Nayler, two British scientists. It became available in 1972 and was the second aminopenicillin to reach the market (after ampicillin in 1961). Co-amoxiclav became available in 1981.
Society and culture
Economics
Amoxicillin is relatively inexpensive. In 2022, a survey of 8 generic antibiotics commonly prescribed in the United States found their average cost to be about $42.67, while amoxicillin was sold for $12.14 on average.
Modes of delivery
Pharmaceutical manufacturers make amoxicillin in trihydrate form, for oral use available as capsules, regular, chewable and dispersible tablets, syrup and pediatric suspension for oral use, and as the sodium salt for intravenous administration.
An extended-release formulation is available. The intravenous form of amoxicillin is not sold in the United States. When an intravenous aminopenicillin is required in the United States, ampicillin is typically used. When there is an adequate response to ampicillin, the course of antibiotic therapy may often be completed with oral amoxicillin.
Research with mice indicated successful delivery using intraperitoneally injected amoxicillin-bearing microparticles.
Names
"Amoxicillin" is the International Nonproprietary Name (INN), British Approved Name (BAN), and United States Adopted Name (USAN), while "amoxycillin" is the Australian Approved Name (AAN).
Amoxicillin is one of the semisynthetic penicillins discovered by former pharmaceutical company Beecham Group. The patent for amoxicillin has expired, thus amoxicillin and co-amoxiclav preparations are marketed under various brand names across the world.
Veterinary uses
Amoxicillin is also sometimes used as an antibiotic for animals. The use of amoxicillin for animals intended for human consumption (chickens, cattle, and swine for example) has been approved.
References
Further reading
External links
Carboxylic acids
Enantiopure drugs
GSK plc brands
Lyme disease
Penicillins
Phenethylamines
Phenols
Wikipedia medicine articles ready to translate
World Health Organization essential medicines
|
https://en.wikipedia.org/wiki/Alkali
|
In chemistry, an alkali (; from ) is a basic, ionic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline, and less often, alkalescent, is commonly used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases.
Etymology
The word "alkali" is derived from Arabic al qalīy (or alkali), meaning the calcined ashes (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes that rendered soaps from fats in the process of saponification, one known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali.
Common properties of alkalis and bases
Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include:
Moderately concentrated solutions (over 10⁻³ M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink.
Concentrated solutions are caustic (causing chemical burns).
Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin.
Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution.
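The pH figure in the first point above follows from the ion product of water. A minimal sketch, assuming a strong, fully dissociated monoacidic alkali such as NaOH at 25 °C:

```python
import math

# Sketch: pH of a strong, fully dissociated monoacidic alkali (e.g. NaOH)
# at 25 deg C, where [OH-] equals the molarity and pH = 14 + log10([OH-]).
# Complete dissociation is assumed.
def ph_strong_alkali(molarity):
    return 14.0 + math.log10(molarity)

print(round(ph_strong_alkali(1e-3), 2))  # 11.0 -- above the pH-10 threshold
```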
Difference between alkali and base
The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering.
There are various, more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen.
A basic salt of an alkali metal or alkaline earth metal (this includes Mg(OH)2 (magnesium hydroxide) but excludes NH3 (ammonia)).
Any base that is soluble in water and forms hydroxide ions or the solution of a base in water. (This includes both Mg(OH)2 and NH3, which forms NH4OH.)
The second subset of bases is also called an "Arrhenius base".
Alkali salts
Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are:
Sodium hydroxide (NaOH) – often called "caustic soda"
Potassium hydroxide (KOH) – commonly called "caustic potash"
Lye – generic term for either of two previous salts or their mixture
Calcium hydroxide (Ca(OH)2) – saturated solution known as "limewater"
Magnesium hydroxide (Mg(OH)2) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions)
Alkaline soil
Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally, due to the presence of alkali salts. Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer a mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems.
Alkali lakes
In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake.
Examples of alkali lakes:
Alkali Lake, Lake County, Oregon
Baldwin Lake, San Bernardino County, California
Bear Lake on the Utah–Idaho border
Lake Magadi in Kenya
Lake Turkana in Kenya
Mono Lake, near Owens Valley in California
Redberry Lake, Saskatchewan
Summer Lake, Lake County, Oregon
Tramping Lake, Saskatchewan
See also
Alkali metals
Alkaline earth metals
Base (chemistry)
References
Inorganic chemistry
|
https://en.wikipedia.org/wiki/Anemometer
|
In meteorology, an anemometer () is a device that measures wind speed and direction. It is a common instrument used in weather stations. The earliest known description of an anemometer was by Italian architect and author Leon Battista Alberti (1404–1472) in 1450.
History
The anemometer has changed little since its development in the 15th century. Alberti is said to have invented it around 1450. In the ensuing centuries numerous others, including Robert Hooke
(1635–1703), developed their own versions, with some mistakenly credited as its inventor. In 1846, Thomas Romney Robinson (1792–1882) improved the design by using four hemispherical cups and mechanical wheels. In 1926, Canadian meteorologist John Patterson (1872–1956) developed a three-cup anemometer, which was improved by Brevoort and Joiner in 1935. In 1991, Derek Weston added the ability to measure wind direction. In 1994, Andreas Pflitsch developed the sonic anemometer.
Velocity anemometers
Cup anemometers
A simple type of anemometer was invented in 1845 by Rev Dr John Thomas Romney Robinson of Armagh Observatory. It consisted of four hemispherical cups on horizontal arms mounted on a vertical shaft. The air flow past the cups in any horizontal direction turned the shaft at a rate roughly proportional to the wind's speed. Therefore, counting the shaft's revolutions over a set time interval produced a value proportional to the average wind speed for a wide range of speeds. This type of instrument is also called a rotational anemometer.
With a four-cup anemometer, the wind always has the hollow of one cup presented to it and is blowing on the back of the opposing cup. Since a hollow hemisphere has a drag coefficient of 0.38 on the spherical side and 1.42 on the hollow side, more force is generated on the cup that presents its hollow side to the wind. Because of this asymmetrical force, torque is generated on the anemometer's axis, causing it to spin.
Theoretically, the anemometer's speed of rotation should be proportional to the wind speed because the force produced on an object is proportional to the speed of the gas or fluid flowing past it. However, in practice, other factors influence the rotational speed, including turbulence produced by the apparatus, increasing drag in opposition to the torque produced by the cups and support arms, and friction on the mount point. When Robinson first designed his anemometer, he asserted that the cups moved at one-third of the speed of the wind, unaffected by cup size or arm length. This was apparently confirmed by some early independent experiments, but it was incorrect. Instead, the ratio of the speed of the wind to that of the cups, the anemometer factor, depends on the dimensions of the cups and arms, and can have a value between two and a little over three. Once the error was discovered, all previous experiments involving anemometers had to be repeated.
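The conversion described above — counting revolutions, computing the cup speed, and multiplying by the anemometer factor — can be sketched as follows. All numbers here are illustrative assumptions; a real anemometer factor must come from calibration of the particular instrument:

```python
import math

# Sketch: converting cup-wheel revolutions over an interval to a wind-speed
# estimate.  The factor of 3.0 and the geometry are illustrative assumptions;
# real factors run from about 2 to a little over 3 and are found by
# calibration.
def wind_speed_from_revs(revs, interval_s, arm_radius_m, factor=3.0):
    cup_speed = revs * 2.0 * math.pi * arm_radius_m / interval_s  # m/s
    return factor * cup_speed

# 120 revolutions in 60 s on 0.1 m arms:
print(round(wind_speed_from_revs(120, 60, 0.1), 2))  # ~3.77 m/s
```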
The three-cup anemometer developed by Canadian John Patterson in 1926, and subsequent cup improvements by Brevoort & Joiner of the United States in 1935, led to a cupwheel design with a nearly linear response and an error of less than 3% up to . Patterson found that each cup produced maximum torque when it was at 45° to the wind flow. The three-cup anemometer also had a more constant torque and responded more quickly to gusts than the four-cup anemometer.
The three-cup anemometer was further modified by Australian Dr. Derek Weston in 1991 to also measure wind direction. He added a tag to one cup, causing the cupwheel speed to increase and decrease as the tag moved alternately with and against the wind. Wind direction is calculated from these cyclical changes in speed, while wind speed is determined from the average cupwheel speed.
Three-cup anemometers are currently the industry standard for wind resource assessment studies and practice.
Vane anemometers
One of the other forms of mechanical velocity anemometer is the vane anemometer. It may be described as a windmill or a propeller anemometer. Unlike the Robinson anemometer, whose axis of rotation is vertical, the vane anemometer must have its axis parallel to the direction of the wind and is therefore horizontal. Furthermore, since the wind varies in direction and the axis has to follow its changes, a wind vane or some other contrivance to fulfill the same purpose must be employed.
A vane anemometer thus combines a propeller and a tail on the same axis to obtain accurate and precise wind speed and direction measurements from the same instrument. The speed of the fan is measured by a rev counter and converted to a windspeed by an electronic chip. Hence, volumetric flow rate may be calculated if the cross-sectional area is known.
In cases where the direction of the air motion is always the same, as in ventilating shafts of mines and buildings, wind vanes known as air meters are employed, and give satisfactory results.
Hot-wire anemometers
Hot-wire anemometers use a fine wire (on the order of several micrometres) electrically heated to some temperature above the ambient. Air flowing past the wire cools the wire. As the electrical resistance of most metals is dependent upon the temperature of the metal (tungsten is a popular choice for hot-wires), a relationship can be obtained between the resistance of the wire and the speed of the air. In most cases, they cannot be used to measure the direction of the airflow, unless coupled with a wind vane.
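A common empirical form for this resistance–velocity relationship is King's law, E² = A + B·vⁿ, relating the bridge voltage E required to hold the wire at its operating temperature to the flow speed v. A hedged sketch with made-up calibration constants (real probes are calibrated against a flow of known speed):

```python
import math

# Calibration constants: illustrative values only, not from a real probe.
A, B, N = 1.2, 0.8, 0.5

def kings_law_voltage(v):
    """Forward model (King's law): bridge voltage satisfies E^2 = A + B*v^N."""
    return math.sqrt(A + B * v**N)

def kings_law_velocity(E):
    """Invert King's law to recover flow speed from the measured voltage."""
    return ((E**2 - A) / B) ** (1.0 / N)

# Round trip: synthesise the voltage for a 10 m/s flow, then invert it.
v_est = kings_law_velocity(kings_law_voltage(10.0))
print(f"recovered flow speed: {v_est:.3f} m/s")
```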
Several ways of implementing this exist, and hot-wire devices can be further classified as CCA (constant current anemometer), CVA (constant voltage anemometer) and CTA (constant-temperature anemometer). The voltage output from these anemometers is thus the result of some sort of circuit within the device trying to maintain the specific variable (current, voltage or temperature) constant, following Ohm's law.
Additionally, PWM (pulse-width modulation) anemometers are also used, wherein the velocity is inferred by the time length of a repeating pulse of current that brings the wire up to a specified resistance and then stops until a threshold "floor" is reached, at which time the pulse is sent again.
Hot-wire anemometers, while extremely delicate, have extremely high frequency-response and fine spatial resolution compared to other measurement methods, and as such are almost universally employed for the detailed study of turbulent flows, or any flow in which rapid velocity fluctuations are of interest.
An industrial version of the fine-wire anemometer is the thermal flow meter, which follows the same concept, but uses two pins or strings to monitor the variation in temperature. The strings contain fine wires, but encasing the wires makes them much more durable and capable of accurately measuring air, gas, and emissions flow in pipes, ducts, and stacks. Industrial applications often contain dirt that will damage the classic hot-wire anemometer.
Laser Doppler anemometers
In laser Doppler velocimetry, laser Doppler anemometers use a beam of light from a laser that is divided into two beams, with one propagated out of the anemometer. Particulates (or deliberately introduced seed material) flowing along with air molecules near where the beam exits reflect, or backscatter, the light back into a detector, where it is measured relative to the original laser beam. Moving particles produce a Doppler shift in the backscattered light, and measuring this shift gives the speed of the particles, and therefore of the air around the anemometer.
Ultrasonic anemometers
Ultrasonic anemometers, first developed in the 1950s, use ultrasonic sound waves to measure wind velocity. They measure wind speed based on the time of flight of sonic pulses between pairs of transducers. Measurements from pairs of transducers can be combined to yield a measurement of velocity in 1-, 2-, or 3-dimensional flow. The spatial resolution is given by the path length between transducers, which is typically 10 to 20 cm. Ultrasonic anemometers can take measurements with very fine temporal resolution, 20 Hz or better, which makes them well suited for turbulence measurements. The lack of moving parts makes them appropriate for long-term use in exposed automated weather stations and weather buoys where the accuracy and reliability of traditional cup-and-vane anemometers are adversely affected by salty air or dust. Their main disadvantage is the distortion of the air flow by the structure supporting the transducers, which requires a correction based upon wind tunnel measurements to minimize the effect. An international standard for this process, ISO 16622 Meteorology—Ultrasonic anemometers/thermometers—Acceptance test methods for mean wind measurements is in general circulation. Another disadvantage is lower accuracy due to precipitation, where rain drops may vary the speed of sound.
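The time-of-flight principle above can be sketched directly: over a path of length L, a pulse travelling with the wind arrives sooner than one travelling against it, and the two transit times yield both the wind speed and the speed of sound. A minimal sketch with illustrative numbers:

```python
L = 0.15       # path length between transducers, m (typical 10-20 cm)
c = 343.0      # speed of sound, m/s (illustrative)
v_true = 5.0   # along-path wind component, m/s (illustrative)

t_with = L / (c + v_true)     # pulse travelling with the wind
t_against = L / (c - v_true)  # pulse travelling against the wind

# Recover wind speed and sound speed from the two transit times alone
v_est = (L / 2) * (1 / t_with - 1 / t_against)
c_est = (L / 2) * (1 / t_with + 1 / t_against)

print(f"wind: {v_est:.2f} m/s, sound: {c_est:.1f} m/s")
```

The sum term also shows why such instruments double as thermometers: it recovers the speed of sound, which depends mainly on temperature.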
Since the speed of sound varies with temperature, and is virtually stable with pressure change, ultrasonic anemometers are also used as thermometers.
Two-dimensional (wind speed and wind direction) sonic anemometers are used in applications such as weather stations, ship navigation, aviation, weather buoys and wind turbines. Monitoring wind turbines usually requires a refresh rate of wind speed measurements of 3 Hz, easily achieved by sonic anemometers. Three-dimensional sonic anemometers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method when used with fast-response infrared gas analyzers or laser-based analyzers.
Two-dimensional wind sensors are of two types:
Two ultrasonic paths: These sensors have four arms. The disadvantage of this type of sensor is that when the wind comes in the direction of an ultrasonic path, the arms disturb the airflow, reducing the accuracy of the resulting measurement.
Three ultrasonic paths: These sensors have three arms. They provide one redundant path measurement, which improves the sensor's accuracy and reduces aerodynamic turbulence.
Acoustic resonance anemometers
Acoustic resonance anemometers are a more recent variant of sonic anemometer. The technology was invented by Savvas Kapartis and patented in 1999. Whereas conventional sonic anemometers rely on time of flight measurement, acoustic resonance sensors use resonating acoustic (ultrasonic) waves within a small purpose-built cavity in order to perform their measurement.
Built into the cavity is an array of ultrasonic transducers, which are used to create the separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the wave's property occurs (phase shift). By measuring the amount of phase shift in the received signals by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction.
Because acoustic resonance technology enables measurement within a small cavity, the sensors are typically smaller than other ultrasonic sensors. The small size of acoustic resonance anemometers makes them physically strong and easy to heat, and therefore resistant to icing. This combination of features means that they achieve high levels of data availability and are well suited to wind turbine control and to other uses that require small robust sensors, such as battlefield meteorology. One issue with this sensor type is measurement accuracy when compared to a calibrated mechanical sensor. For many end uses, this weakness is compensated for by the sensor's longevity and the fact that it does not require recalibration once installed.
Ping-pong ball anemometers
A common anemometer for basic use is constructed from a ping-pong ball attached to a string. When the wind blows horizontally, it presses on and moves the ball; because ping-pong balls are very lightweight, they move easily in light winds. Measuring the angle between the string-ball apparatus and the vertical gives an estimate of the wind speed.
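Balancing drag against the ball's weight gives the deflection relation tan θ = F_drag/(m·g); inverting it turns the measured string angle into a wind-speed estimate. A sketch using standard ping-pong ball mass and radius and a rough sphere drag coefficient (all values illustrative):

```python
import math

def wind_speed_from_angle(theta_deg, m=0.0027, r=0.02, cd=0.5,
                          rho=1.225, g=9.81):
    """Estimate wind speed (m/s) from the string's deflection from vertical.
    Balances drag 0.5*rho*Cd*A*v^2 against gravity: tan(theta) = F_drag/(m*g).
    m, r are standard ping-pong ball values; Cd=0.5 is a rough sphere figure."""
    area = math.pi * r**2
    theta = math.radians(theta_deg)
    return math.sqrt(2 * m * g * math.tan(theta) / (rho * cd * area))

# A 45-degree deflection corresponds to a moderate breeze of roughly 8 m/s
print(f"{wind_speed_from_angle(45):.1f} m/s")
```

Because the speed grows only as the square root of tan θ, the device is most sensitive at small angles, i.e. in the light winds for which it is intended.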
This type of anemometer is mostly used for middle-school level instruction, which most students make on their own, but a similar device was also flown on the Phoenix Mars Lander.
Pressure anemometers
The first designs of anemometers that measure the pressure were divided into plate and tube classes.
Plate anemometers
These are the first modern anemometers. They consist of a flat plate suspended from the top so that the wind deflects the plate. In 1450, the Italian artist and architect Leon Battista Alberti invented the first mechanical anemometer; in 1664 it was re-invented by Robert Hooke (who is often mistakenly considered the inventor of the first anemometer). Later versions of this form consisted of a flat plate, either square or circular, which is kept normal to the wind by a wind vane. The pressure of the wind on its face is balanced by a spring. The compression of the spring determines the actual force which the wind is exerting on the plate, and this is either read off on a suitable gauge, or on a recorder. Instruments of this kind do not respond to light winds, are inaccurate for high wind readings, and are slow at responding to variable winds. Plate anemometers have been used to trigger high wind alarms on bridges.
Tube anemometers
James Lind's anemometer of 1775 consisted of a vertically mounted glass U tube containing a liquid manometer (pressure gauge), with one end bent out in a horizontal direction to face the wind flow and the other vertical end capped. Though the Lind was not the first, it was the most practical and best known anemometer of this type. If the wind blows into the mouth of a tube it causes an increase of pressure on one side of the manometer. The wind over the open end of a vertical tube causes little change in pressure on the other side of the manometer. The resulting elevation difference in the two legs of the U tube is an indication of the wind speed. However, an accurate measurement requires that the wind speed be directly into the open end of the tube; small departures from the true direction of the wind cause large variations in the reading.
The successful metal pressure tube anemometer of William Henry Dines in 1892 utilized the same pressure difference between the open mouth of a straight tube facing the wind and a ring of small holes in a vertical tube which is closed at the upper end. Both are mounted at the same height. The pressure differences on which the action depends are very small, and special means are required to register them. The recorder consists of a float in a sealed chamber partially filled with water. The pipe from the straight tube is connected to the top of the sealed chamber and the pipe from the small tubes is directed into the bottom inside the float. Since the pressure difference determines the vertical position of the float this is a measure of the wind speed.
The great advantage of the tube anemometer lies in the fact that the exposed part can be mounted on a high pole, and requires no oiling or attention for years; and the registering part can be placed in any convenient position. Two connecting tubes are required. It might appear at first sight as though one connection would serve, but the differences in pressure on which these instruments depend are so minute, that the pressure of the air in the room where the recording part is placed has to be considered. Thus if the instrument depends on the pressure or suction effect alone, and this pressure or suction is measured against the air pressure in an ordinary room, in which the doors and windows are carefully closed and a newspaper is then burnt up the chimney, an effect may be produced equal to a wind of 10 mi/h (16 km/h); and the opening of a window in rough weather, or the opening of a door, may entirely alter the registration.
While the Dines anemometer had an error of only 1% at , it did not respond very well to low winds due to the poor response of the flat plate vane required to turn the head into the wind. In 1918 an aerodynamic vane with eight times the torque of the flat plate overcame this problem.
Pitot tube static anemometers
Modern tube anemometers use the same principle as the Dines anemometer but with a different design. The implementation uses a pitot-static tube, which is a pitot tube with two ports, pitot and static, that is normally used in measuring the airspeed of aircraft. The pitot port measures the dynamic pressure at the open mouth of a tube with a pointed head facing the wind, and the static port measures the static pressure from small holes along the side of that tube. The pitot tube is connected to a tail so that the tube's head always faces the wind. Additionally, the tube is heated to prevent rime ice formation on the tube. Two lines run from the tube down to devices that measure the difference in pressure between them. The measurement devices can be manometers, pressure transducers, or analog chart recorders.
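Underlying all of these tube instruments is Bernoulli's relation: the difference between the pitot (total) and static pressures is the dynamic pressure q = ½ρv². A minimal sketch of the conversion:

```python
import math

def pitot_wind_speed(dp, rho=1.225):
    """Wind speed from the pitot-minus-static pressure difference dp (Pa).
    Incompressible Bernoulli: dp = q = 0.5 * rho * v^2, so v = sqrt(2*dp/rho).
    rho is the air density assumed at calibration (sea-level value here)."""
    return math.sqrt(2 * dp / rho)

# A 10 m/s wind produces q = 0.5 * 1.225 * 10^2 = 61.25 Pa
print(f"{pitot_wind_speed(61.25):.1f} m/s")
```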
Effect of density on measurements
In the tube anemometer the dynamic pressure is actually being measured, although the scale is usually graduated as a velocity scale. If the actual air density differs from the calibration value, due to differing temperature, elevation or barometric pressure, a correction is required to obtain the actual wind speed. Approximately 1.5% (1.6% above 6,000 feet) should be added to the velocity recorded by a tube anemometer for each 1000 ft (5% for each kilometer) above sea-level.
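Because the instrument actually senses q = ½ρv², the indicated speed scales with the square root of the density ratio; the quoted ~1.5% per 1000 ft follows from air density falling roughly 3% per 1000 ft near sea level (that lapse figure is itself an approximation):

```python
import math

def density_corrected_speed(v_indicated, rho_actual, rho_cal=1.225):
    """True wind speed from a tube anemometer reading.
    The gauge is graduated assuming calibration density rho_cal, so
    v_true = v_indicated * sqrt(rho_cal / rho_actual)."""
    return v_indicated * math.sqrt(rho_cal / rho_actual)

# At ~1000 ft, density is about 3% below the sea-level calibration value;
# the correction factor is sqrt(1/0.97) - 1, about 1.5% as stated above.
factor = math.sqrt(1 / 0.97)
print(f"correction per 1000 ft: {100 * (factor - 1):.2f}%")
print(f"20 m/s indicated -> {density_corrected_speed(20.0, 1.225 * 0.97):.2f} m/s true")
```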
Effect of icing
At airports, it is essential to have accurate wind data under all conditions, including freezing precipitation. Anemometry is also required in monitoring and controlling the operation of wind turbines, which in cold environments are prone to in-cloud icing. Icing alters the aerodynamics of an anemometer and may entirely block it from operating. Therefore, anemometers used in these applications must be internally heated. Both cup anemometers and sonic anemometers are presently available with heated versions.
Instrument location
In order for wind speeds to be comparable from location to location, the effect of the terrain needs to be considered, especially in regard to height. Other considerations are the presence of trees, and both natural canyons and artificial canyons (urban buildings). The standard anemometer height in open rural terrain is 10 meters.
See also
Air flow meter
Anemoi, for the ancient origin of the name of this technology
Anemoscope, ancient device for measuring or predicting wind direction or weather
Automated airport weather station
Night of the Big Wind
Particle image velocimetry
Savonius wind turbine
Wind power forecasting
Wind run
Windsock, a simple high-visibility indicator of approximate wind speed and direction
Notes
References
Meteorological Instruments, W.E. Knowles Middleton and Athelstan F. Spilhaus, Third Edition revised, University of Toronto Press, Toronto, 1953
Invention of the Meteorological Instruments, W. E. Knowles Middleton, The Johns Hopkins Press, Baltimore, 1969
External links
Description of the development and the construction of an ultrasonic anemometer
Animation Showing Sonic Principle of Operation (Time of Flight Theory) – Gill Instruments
Collection of historical anemometer
Principle of Operation: Acoustic Resonance measurement – FT Technologies
Thermopedia, "Anemometers (laser doppler)"
Thermopedia, "Anemometers (pulsed thermal)"
Thermopedia, "Anemometers (vane)"
The Rotorvane Anemometer. Measuring both wind speed and direction using a tagged three-cup sensor
https://en.wikipedia.org/wiki/Arcturus
Arcturus is the brightest star in the northern constellation of Boötes. With an apparent visual magnitude of −0.05, it is the fourth-brightest star in the night sky, and the brightest in the northern celestial hemisphere. The name Arcturus originated from ancient Greece; it was then cataloged as α Boötis by Johann Bayer in 1603, which is Latinized to Alpha Boötis. Arcturus forms one corner of the Spring Triangle asterism.
Located relatively close at 36.7 light-years from the Sun, Arcturus is a single red giant of spectral type K1.5III—an aging star around 7.1 billion years old that has used up its core hydrogen and evolved off the main sequence. It is about the same mass as the Sun, but has expanded to 25 times its size and is around 170 times as luminous. Its diameter is 35 million kilometres. Thus far no companion has been detected.
Nomenclature
The traditional name Arcturus is Latinised from the ancient Greek Ἀρκτοῦρος (Arktouros) and means "Guardian of the Bear", ultimately from ἄρκτος (arktos), "bear" and οὖρος (ouros), "watcher, guardian".
The designation of Arcturus as α Boötis (Latinised to Alpha Boötis) was made by Johann Bayer in 1603. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Arcturus for α Boötis.
Observation
With an apparent visual magnitude of −0.05, Arcturus is the brightest star in the northern celestial hemisphere and the fourth-brightest star in the night sky, after Sirius (−1.46 apparent magnitude), Canopus (−0.72) and α Centauri (combined magnitude of −0.27). However, α Centauri AB is a binary star, whose components are both fainter than Arcturus. This makes Arcturus the third-brightest individual star, just ahead of α Centauri A (officially named Rigil Kentaurus). The French mathematician and astronomer Jean-Baptiste Morin observed Arcturus in the daytime with a telescope in 1635, a first for any star other than the Sun and supernovae. Arcturus has been seen at or just before sunset with the naked eye.
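The distinction between combined and component magnitudes matters because the magnitude scale is logarithmic: fluxes add, magnitudes do not. A sketch using commonly quoted component magnitudes for α Centauri A and B (roughly +0.01 and +1.33; exact values vary slightly by source):

```python
import math

def combined_magnitude(m1, m2):
    """Combined apparent magnitude of two unresolved stars.
    Convert each magnitude to a relative flux 10^(-0.4*m), sum the
    fluxes, and convert back: m = -2.5 * log10(f1 + f2)."""
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

# alpha Centauri A (+0.01) and B (+1.33) combine to about -0.27,
# brighter than either component but both fainter than Arcturus (-0.05).
print(f"{combined_magnitude(0.01, 1.33):.2f}")
```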
Arcturus is visible from both of Earth's hemispheres as it is located 19° north of the celestial equator. The star culminates at midnight on 27 April and at 9 p.m. on 10 June, and is visible during the late northern spring or the southern autumn. From the northern hemisphere, an easy way to find Arcturus is to follow the arc of the handle of the Big Dipper (or Plough in the UK). By continuing along this path, one can find Spica: "Arc to Arcturus, then spike (or speed on) to Spica". Together with the bright stars Spica and Denebola (or Regulus, depending on the source), Arcturus is part of the Spring Triangle asterism. With Cor Caroli, these four stars form the Great Diamond asterism.
Ptolemy described Arcturus as subrufa ("slightly red"): it has a B-V color index of +1.23, roughly midway between Pollux (B-V +1.00) and Aldebaran (B-V +1.54).
η Boötis, or Muphrid, is only 3.3 light-years distant from Arcturus. Seen from Arcturus, Muphrid would have a visual magnitude of −2.5, about as bright as Jupiter at its brightest from Earth, whereas an observer in the Muphrid system would find Arcturus at magnitude −5.0, slightly brighter than Venus as seen from Earth, but with an orangish color.
Physical characteristics
Based upon an annual parallax shift of 88.83 milliarcseconds as measured by the Hipparcos satellite, Arcturus is from the Sun. The parallax margin of error is 0.54 milliarcseconds, translating to a distance margin of error of ±. Because of its proximity, Arcturus has a high proper motion, two arcseconds a year, greater than any first magnitude star other than α Centauri.
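The distance follows directly from the parallax: the distance in parsecs is the reciprocal of the parallax in arcseconds, and the quoted parallax uncertainty propagates to first order as d·(Δp/p):

```python
p_mas, p_err_mas = 88.83, 0.54  # Hipparcos parallax and uncertainty, mas
PC_TO_LY = 3.26156              # light-years per parsec

d_pc = 1000.0 / p_mas                # d [pc] = 1 / p [arcsec]
d_ly = d_pc * PC_TO_LY               # ~36.7 light-years, as quoted above
err_ly = d_ly * (p_err_mas / p_mas)  # first-order error propagation

print(f"{d_ly:.1f} +/- {err_ly:.2f} light-years")
```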
Arcturus is moving rapidly () relative to the Sun, and is now almost at its closest point to the Sun. Closest approach will happen in about 4,000 years, when the star will be a few hundredths of a light-year closer to Earth than it is today. (In antiquity, Arcturus was closer to the centre of the constellation.) Arcturus is thought to be an old-disk star, and appears to be moving with a group of 52 other such stars, known as the Arcturus stream.
With an absolute magnitude of −0.30, Arcturus is, together with Vega and Sirius, one of the most luminous stars in the Sun's neighborhood. It is about 110 times brighter than the Sun in visible light wavelengths, but this underestimates its strength as much of the light it gives off is in the infrared; total (bolometric) power output is about 180 times that of the Sun. With a near-infrared J band magnitude of −2.2, only Betelgeuse (−2.9) and R Doradus (−2.6) are brighter. The lower output in visible light is due to a lower efficacy as the star has a lower surface temperature than the Sun.
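The "about 110 times brighter" figure follows from the magnitude–luminosity relation, taking the Sun's absolute visual magnitude as about +4.83:

```python
M_arcturus = -0.30  # absolute visual magnitude (from above)
M_SUN = 4.83        # Sun's absolute visual magnitude (commonly quoted value)

# Pogson's relation: each 5 magnitudes is a factor of 100 in luminosity,
# so L1/L2 = 10^(0.4 * (M2 - M1)).
ratio = 10 ** (0.4 * (M_SUN - M_arcturus))
print(f"visible-light output: about {ratio:.0f} times the Sun's")
```

The result is roughly 113, consistent with the "about 110 times" in the text; the bolometric factor of ~180 is larger because it also counts the infrared output.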
As a single star, the mass of Arcturus cannot be measured directly, but models suggest it is slightly greater than that of the Sun. Evolutionary matching to the observed physical parameters gives a mass of , while the oxygen isotope ratio for a first dredge-up star gives a mass of . Given the star's evolutionary state, it is expected to have undergone significant mass loss in the past. The star displays magnetic activity that is heating the coronal structures, and it undergoes a solar-type magnetic cycle with a duration that is probably less than 14 years. A weak magnetic field has been detected in the photosphere with a strength of around half a gauss. The magnetic activity appears to lie along four latitudes and is rotationally modulated.
Arcturus is estimated to be around 6 to 8.5 billion years old, but there is some uncertainty about its evolutionary status. Based upon the color characteristics of Arcturus, it is currently ascending the red-giant branch and will continue to do so until it accumulates a large enough degenerate helium core to ignite the helium flash. It has likely exhausted the hydrogen from its core and is now in its active hydrogen shell burning phase. However, Charbonnel et al. (1998) placed it slightly above the horizontal branch, and suggested it has already completed the helium flash stage.
Spectrum
Arcturus has evolved off the main sequence to the red giant branch, reaching an early K-type stellar classification. It is frequently assigned the spectral type of K0III, but in 1989 was used as the spectral standard for type K1.5III Fe−0.5, with the suffix notation indicating a mild underabundance of iron compared to typical stars of its type. As the brightest K-type giant in the sky, it has been the subject of multiple atlases with coverage from the ultraviolet to infrared.
The spectrum shows a dramatic transition from emission lines in the ultraviolet to atomic absorption lines in the visible range and molecular absorption lines in the infrared. This is due to the optical depth of the atmosphere varying with wavelength. The spectrum shows very strong absorption in some molecular lines that are not produced in the photosphere but in a surrounding shell. Examination of carbon monoxide lines show the molecular component of the atmosphere extending outward to 2–3 times the radius of the star, with the chromospheric wind steeply accelerating to 35–40 km/s in this region.
Astronomers term "metals" those elements with higher atomic numbers than helium. The atmosphere of Arcturus has an enrichment of alpha elements relative to iron but only about a third of solar metallicity. Arcturus is possibly a Population II star.
Oscillations
As one of the brightest stars in the sky, Arcturus has been the subject of a number of studies in the emerging field of asteroseismology. Belmonte and colleagues carried out a radial velocity (Doppler shift of spectral lines) study of the star in April and May 1988, which showed variability with a frequency of the order of a few microhertz (μHz), the highest peak corresponding to 4.3 μHz (2.7 days) with an amplitude of 60 ms−1, with a frequency separation of c. 5 μHz. They suggested that the most plausible explanation for the variability of Arcturus is stellar oscillations.
Asteroseismological measurements allow direct calculation of the mass and radius, giving values of and . This form of modelling is still relatively inaccurate, but it provides a useful check on other models.
Possible planetary system
Hipparcos satellite astrometry suggested that Arcturus is a binary star, with the companion about twenty times dimmer than the primary and orbiting close enough to be at the very limits of humans' current ability to make it out. Recent results remain inconclusive, but do support the marginal Hipparcos detection of a binary companion.
In 1993, radial velocity measurements of Aldebaran, Arcturus and Pollux showed that Arcturus exhibited a long-period radial velocity oscillation, which could be interpreted as a substellar companion. This substellar object would be nearly 12 times the mass of Jupiter and be located roughly at the same orbital distance from Arcturus as the Earth is from the Sun, at 1.1 astronomical units. However, all three stars surveyed showed similar oscillations yielding similar companion masses, and the authors concluded that the variation was likely to be intrinsic to the star rather than due to the gravitational effect of a companion. So far no substellar companion has been confirmed.
Mythology
One astronomical tradition associates Arcturus with the mythology around Arcas, who was about to shoot and kill his own mother Callisto who had been transformed into a bear. Zeus averted their imminent tragic fate by transforming the boy into the constellation Boötes, called Arctophylax "bear guardian" by the Greeks, and his mother into Ursa Major (Greek: Arctos "the bear"). The account is given in Hyginus's Astronomy.
Aratus in his Phaenomena said that the star Arcturus lay below the belt of Arctophylax, and according to Ptolemy in the Almagest it lay between his thighs.
An alternative lore associates the name with the legend around Icarius, who gave the gift of wine to other men, but was murdered by them, because they had had no experience with intoxication and mistook the wine for poison. It is stated that this Icarius became Arcturus, while his dog, Maira, became Canicula (Procyon), although "Arcturus" here may be used in the sense of the constellation rather than the star.
Cultural significance
As one of the brightest stars in the sky, Arcturus has been significant to observers since antiquity.
In ancient Mesopotamia, it was linked to the god Enlil, and also known as Shudun, "yoke", or SHU-PA of unknown derivation in the Three Stars Each Babylonian star catalogues and later MUL.APIN around 1100 BC.
In ancient Greek the star is found in ancient astronomical literature, e.g. Hesiod's Work and Days, circa 700 BC, as well as Hipparchus's and Ptolemy's star catalogs. The folk-etymology connecting the star name with the bears (Greek: ἄρκτος, arktos) was probably invented much later. It fell out of use in favour of Arabic names until it was revived in the Renaissance.
In Arabic, Arcturus is one of two stars called al-simāk "the uplifted ones" (the other is Spica). Arcturus is specified as السماك الرامح as-simāk ar-rāmiħ "the uplifted one of the lancer". The term Al Simak Al Ramih has appeared in the Al Achsasi Al Mouakket catalogue (translated into Latin as Al Simak Lanceator). This has been variously romanized in the past, leading to obsolete variants such as Aramec and Azimech. For example, the name Alramih is used in Geoffrey Chaucer's A Treatise on the Astrolabe (1391). Another Arabic name is Haris-el-sema, from حارس السماء ħāris al-samā’ "the keeper of heaven", or حارس الشمال ħāris al-shamāl "the keeper of north".
In Indian astronomy, Arcturus is called Swati or Svati (Devanagari स्वाति, Transliteration IAST svāti, svātī́), possibly 'su' + 'ati' ("great goer", in reference to its remoteness) meaning very beneficent. It has been referred to as "the real pearl" in Bhartṛhari's kāvyas.
In Chinese astronomy, Arcturus is called Da Jiao (), because it is the brightest star in the Chinese constellation called Jiao Xiu (). Later it became a part of another constellation Kang Xiu ().
The Wotjobaluk Koori people of southeastern Australia knew Arcturus as Marpean-kurrk, mother of Djuit (Antares) and another star in Boötes, Weet-kurrk (Muphrid). Its appearance in the north signified the arrival of the larvae of the wood ant (a food item) in spring. The beginning of summer was marked by the star's setting with the Sun in the west and the disappearance of the larvae. The people of Milingimbi Island in Arnhem Land saw Arcturus and Muphrid as man and woman, and took the appearance of Arcturus at sunrise as a sign to go and harvest rakia or spikerush. The Weilwan of northern New South Wales knew Arcturus as Guembila "red".
Prehistoric Polynesian navigators knew Arcturus as Hōkūleʻa, the "Star of Joy". Arcturus is the zenith star of the Hawaiian Islands. Using Hōkūleʻa and other stars, the Polynesians launched their double-hulled canoes from Tahiti and the Marquesas Islands. Traveling east and north they eventually crossed the equator and reached the latitude at which Arcturus would appear directly overhead in the summer night sky. Knowing they had arrived at the exact latitude of the island chain, they sailed due west on the trade winds to landfall. If Hōkūleʻa could be kept directly overhead, they landed on the southeastern shores of the Big Island of Hawaii. For a return trip to Tahiti the navigators could use Sirius, the zenith star of that island. Since 1976, the Polynesian Voyaging Society's Hōkūleʻa has crossed the Pacific Ocean many times under navigators who have incorporated this wayfinding technique in their non-instrument navigation.
Arcturus had several other names that described its significance to indigenous Polynesians. In the Society Islands, Arcturus, called Ana-tahua-taata-metua-te-tupu-mavae ("a pillar to stand by"), was one of the ten "pillars of the sky", bright stars that represented the ten heavens of the Tahitian afterlife. In Hawaii, the pattern of Boötes was called Hoku-iwa, meaning "stars of the frigatebird". This constellation marked the path for Hawaiʻiloa on his return to Hawaii from the South Pacific Ocean. The Hawaiians called Arcturus Hoku-leʻa. It was equated to the Tuamotuan constellation Te Kiva, meaning "frigatebird", which could either represent the figure of Boötes or just Arcturus. However, Arcturus may instead be the Tuamotuan star called Turu. The Hawaiian name for Arcturus as a single star was likely Hoku-leʻa, which means "star of gladness", or "clear star". In the Marquesas Islands, Arcturus was probably called Tau-tou and was the star that ruled the month approximating January. The Māori and Moriori called it Tautoru, a variant of the Marquesan name and a name shared with Orion's Belt.
In Inuit astronomy, Arcturus is called the Old Man (Uttuqalualuk in Inuit languages) and The First Ones (Sivulliik in Inuit languages).
The Miꞌkmaq of eastern Canada saw Arcturus as Kookoogwéss, the owl.
Early-20th-century Armenian scientist Nazaret Daghavarian theorized that the star commonly referred to in Armenian folklore as Gutani astgh (Armenian: Գութանի աստղ; lit. star of the plow) was in fact Arcturus, as the constellation of Boötes was called "Ezogh" (Armenian: Եզող; lit. the person who is plowing) by Armenians.
In popular culture
In Ancient Rome, the star's celestial activity was supposed to portend tempestuous weather, and a personification of the star acts as narrator of the prologue to Plautus' comedy Rudens (circa 211 BC).
The Kāraṇḍavyūha Sūtra, compiled at the end of the 4th century or beginning of the 5th century, names one of Avalokiteśvara's meditative absorptions as "The face of Arcturus".
One of the possible etymologies offered for the name "Arthur" assumes that it is derived from "Arcturus" and that the late 5th to early 6th-century figure on whom the myth of King Arthur is based was originally named for the star.
In the Middle Ages, Arcturus was considered a Behenian fixed star and attributed to the stone Jasper and the plantain herb. Cornelius Agrippa listed its kabbalistic sign under the alternate name Alchameth.
Arcturus's light was employed in the mechanism used to open the 1933 Chicago World's Fair. The star was chosen as it was thought that light from Arcturus had started its journey at about the time of the previous Chicago World's Fair in 1893 (at 36.7 light-years away, the light actually started in 1896).
At the height of the American Civil War, President Abraham Lincoln observed Arcturus through a 9.6-inch refractor telescope when he visited the Naval Observatory in Washington, DC, in August, 1863.
References
Further reading
External links
SolStation.com entry
K-type giants
Suspected variables
Hypothetical planetary systems
Arcturus moving group
Boötes
Bootis, Alpha
BD+19 2777
Bootis, 16
0541
124897
069673
5340
TIC objects
|
https://en.wikipedia.org/wiki/Altair
|
Altair is the brightest star in the constellation of Aquila and the twelfth-brightest star in the night sky. It has the Bayer designation Alpha Aquilae, which is Latinised from α Aquilae and abbreviated Alpha Aql or α Aql. Altair is an A-type main-sequence star with an apparent visual magnitude of 0.77 and is one of the vertices of the Summer Triangle asterism; the other two vertices are marked by Deneb and Vega. It is located at a distance of about 16.7 light-years from the Sun. Altair is currently in the G-cloud—a nearby interstellar cloud, an accumulation of gas and dust.
Altair rotates rapidly, with a velocity at the equator of approximately 286 km/s. This is a significant fraction of the star's estimated breakup speed of 400 km/s. A study with the Palomar Testbed Interferometer revealed that Altair is not spherical, but is flattened at the poles due to its high rate of rotation. Other interferometric studies with multiple telescopes, operating in the infrared, have imaged and confirmed this phenomenon.
Nomenclature
α Aquilae (Latinised to Alpha Aquilae) is the star's Bayer designation. The traditional name Altair has been used since medieval times. It is an abbreviation of the Arabic phrase Al-Nisr Al-Ṭa'ir, "the flying eagle".
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Altair for this star. It is now so entered in the IAU Catalog of Star Names.
Physical characteristics
Along with β Aquilae and γ Aquilae, Altair forms the well-known line of stars sometimes referred to as the Family of Aquila or Shaft of Aquila.
Altair is a type-A main-sequence star with about 1.8 times the mass of the Sun and 11 times its luminosity. It is thought to be a young star close to the zero age main sequence at about 100 million years old, although previous estimates gave an age closer to one billion years old. Altair rotates rapidly, with a rotational period of under eight hours; for comparison, the equator of the Sun makes a complete rotation in a little more than 25 days, but Altair's rotation is similar to, and slightly faster than, those of Jupiter and Saturn. Like those two planets, its rapid rotation causes the star to be oblate; its equatorial diameter is over 20 percent greater than its polar diameter.
Satellite measurements made in 1999 with the Wide Field Infrared Explorer showed that the brightness of Altair fluctuates slightly, varying by just a few thousandths of a magnitude with several different periods less than 2 hours. As a result, it was identified in 2005 as a Delta Scuti variable star. Its light curve can be approximated by adding together a number of sine waves, with periods that range between 0.8 and 1.5 hours. It is a weak source of coronal X-ray emission, with the most active sources of emission being located near the star's equator. This activity may be due to convection cells forming at the cooler equator.
Rotational effects
The angular diameter of Altair was measured interferometrically by R. Hanbury Brown and his co-workers at Narrabri Observatory in the 1960s. They found a diameter of 3 milliarcseconds. Although Hanbury Brown et al. realized that Altair would be rotationally flattened, they had insufficient data to experimentally observe its oblateness. Later, using infrared interferometric measurements made by the Palomar Testbed Interferometer in 1999 and 2000, Altair was found to be flattened. This work was published by G. T. van Belle, David R. Ciardi and their co-authors in 2001.
Theory predicts that, owing to Altair's rapid rotation, its surface gravity and effective temperature should be lower at the equator, making the equator less luminous than the poles. This phenomenon, known as gravity darkening or the von Zeipel effect, was confirmed for Altair by measurements made by the Navy Precision Optical Interferometer in 2001, and analyzed by Ohishi et al. (2004) and Peterson et al. (2006). Also, A. Domiciano de Souza et al. (2005) verified gravity darkening using the measurements made by the Palomar and Navy interferometers, together with new measurements made by the VINCI instrument at the VLTI.
Altair is one of the few stars for which a direct image has been obtained. In 2006 and 2007, J. D. Monnier and his coworkers produced an image of Altair's surface from 2006 infrared observations made with the MIRC instrument on the CHARA array interferometer; this was the first time the surface of any main-sequence star, apart from the Sun, had been imaged. The false-color image was published in 2007. The equatorial radius of the star was estimated to be 2.03 solar radii, and the polar radius 1.63 solar radii—a 25% increase of the stellar radius from pole to equator. The polar axis is inclined by about 60° to the line of sight from the Earth.
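The quoted pole-to-equator increase follows directly from the two radius estimates; a quick arithmetic check in Python, using only the figures quoted above:

```python
# Pole-to-equator radius increase of Altair, from the published estimates
# quoted above (equatorial 2.03 and polar 1.63 solar radii).
equatorial = 2.03
polar = 1.63
increase = (equatorial - polar) / polar
print(f"{increase:.1%}")  # about 25%, matching the figure in the text
```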
Etymology, mythology and culture
The term Al Nesr Al Tair appeared in Al Achsasi al Mouakket's catalogue, which was translated into Latin as Vultur Volans. This name was applied by the Arabs to the asterism of Altair, β Aquilae and γ Aquilae and probably goes back to the ancient Babylonians and Sumerians, who called Altair "the eagle star". The spelling Atair has also been used. Medieval astrolabes of England and Western Europe depicted Altair and Vega as birds.
The Koori people of Victoria also knew Altair as Bunjil, the wedge-tailed eagle, and β and γ Aquilae are his two wives the black swans. The people of the Murray River knew the star as Totyerguil. The Murray River was formed when Totyerguil the hunter speared Otjout, a giant Murray cod, who, when wounded, churned a channel across southern Australia before entering the sky as the constellation Delphinus.
In Chinese belief, the asterism consisting of Altair, β Aquilae and γ Aquilae is known as Hé Gǔ (lit. "river drum"). The Chinese name for Altair is thus Hé Gǔ èr (lit. "river drum two", meaning the "second star of the drum at the river"). However, Altair is better known by its other names: Qiān Niú Xīng or Niú Láng Xīng, translated as the cowherd star. These names are an allusion to a love story, The Cowherd and the Weaver Girl, in which Niulang (represented by Altair) and his two children (represented by β Aquilae and γ Aquilae) are separated from their wife and mother, respectively, Zhinu (represented by Vega), by the Milky Way. They are only permitted to meet once a year, when magpies form a bridge to allow them to cross the Milky Way.
The people of Micronesia called Altair Mai-lapa, meaning "big/old breadfruit", while the Māori people called this star Poutu-te-rangi, meaning "pillar of heaven".
In Western astrology, the star was ill-omened, portending danger from reptiles.
This star is one of the asterisms used by Bugis sailors for navigation, called bintoéng timoro, meaning "eastern star".
A group of Japanese scientists sent a radio signal to Altair in 1983 with the hopes of contacting extraterrestrial life.
NASA announced Altair as the name of the Lunar Surface Access Module (LSAM) on December 13, 2007. The Russian-made Beriev Be-200 Altair seaplane is also named after the star.
Visual companions
The bright primary star has the multiple star designation WDS 19508+0852A and has several faint visual companion stars, WDS 19508+0852B, C, D, E, F and G. All are much more distant than Altair and not physically associated.
See also
Lists of stars
List of brightest stars
List of nearest bright stars
Historical brightest stars
List of most luminous stars
Notes
References
External links
Star with Midriff Bulge Eyed by Astronomers, JPL press release, July 25, 2001.
Spectrum of Altair
Imaging the Surface of Altair, University of Michigan news release detailing the CHARA array direct imaging of the stellar surface in 2007.
PIA04204: Altair, NASA. Image of Altair from the Palomar Testbed Interferometer.
Altair, SolStation.
Secrets of Sun-like star probed, BBC News, June 1, 2007.
Astronomers Capture First Images of the Surface Features of Altair , Astromart.com
Image of Altair from Aladin.
Aquila (constellation)
A-type main-sequence stars
4
Aquilae, 53
Aquilae, Alpha
187642
097649
7557
Delta Scuti variables
Altair
BD+08 4236
G-Cloud
Astronomical objects known since antiquity
0768
TIC objects
|
https://en.wikipedia.org/wiki/Asymptote
|
In analytic geometry, an asymptote () of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity.
The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve.
There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = ƒ(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞.
More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes.
Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis.
Introduction
The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience.
Consider the graph of the function ƒ(x) = 1/x shown in this section. The coordinates of the points on the curve are of the form (x, 1/x) where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of x become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of 1/x, .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large x becomes, its reciprocal 1/x is never 0, so the curve never actually touches the x-axis. Similarly, as the values of x become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale shown, the corresponding values of 1/x, 100, 1,000, 10,000 ..., become larger and larger. So the curve extends farther and farther upward as it comes closer and closer to the y-axis. Thus, both the x and y-axis are asymptotes of the curve. These ideas are part of the basis of the concept of a limit in mathematics, and this connection is explained more fully below.
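The numerical behavior described above can be reproduced directly; a small Python sketch:

```python
# y = 1/x approaches the x-axis as x grows, but never reaches it.
for x in (100, 1000, 10000):
    print(x, 1 / x)   # 0.01, 0.001, 0.0001 — shrinking, never exactly 0

# Near x = 0 the roles swap: the curve climbs along the y-axis.
for x in (0.01, 0.001, 0.0001):
    print(x, 1 / x)   # 100, 1000, 10000 — growing without bound
```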
Asymptotes of functions
The asymptotes most commonly encountered in the study of calculus are of curves of the form y = ƒ(x). These can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. As the name indicates they are parallel to the x-axis. Vertical asymptotes are vertical lines (perpendicular to the x-axis) near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞.
Vertical asymptotes
The line x = a is a vertical asymptote of the graph of the function y = ƒ(x) if at least one of the following statements is true:
lim(x→a⁻) ƒ(x) = ±∞, or lim(x→a⁺) ƒ(x) = ±∞,
where lim(x→a⁻) is the limit as x approaches the value a from the left (from lesser values), and lim(x→a⁺) is the limit as x approaches a from the right.
For example, if ƒ(x) = x/(x–1), the numerator approaches 1 and the denominator approaches 0 as x approaches 1. So
lim(x→1⁺) x/(x–1) = +∞ and lim(x→1⁻) x/(x–1) = −∞,
and the curve has a vertical asymptote x = 1.
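The one-sided blow-up at x = 1 can be probed numerically; a Python sketch for the ƒ(x) = x/(x − 1) example:

```python
# f(x) = x/(x - 1) near its vertical asymptote x = 1: the function grows
# without bound from the right and falls without bound from the left.
def f(x):
    return x / (x - 1)

for h in (1e-2, 1e-4, 1e-6):
    print(f(1 + h), f(1 - h))   # first column -> +inf, second -> -inf
```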
The function ƒ(x) may or may not be defined at a, and its precise value at the point x = a does not affect the asymptote. For example, the function
ƒ(x) = 1/x for x > 0, ƒ(x) = 5 for x ≤ 0
has a limit of +∞ as x → 0⁺, so ƒ(x) has the vertical asymptote x = 0, even though ƒ(0) = 5. The graph of this function does intersect the vertical asymptote once, at (0, 5). It is impossible for the graph of a function to intersect a vertical asymptote (or a vertical line in general) in more than one point. Moreover, if a function is continuous at each point where it is defined, it is impossible that its graph does intersect any vertical asymptote.
A common example of a vertical asymptote is the case of a rational function at a point x such that the denominator is zero and the numerator is non-zero.
If a function has a vertical asymptote, then it isn't necessarily true that the derivative of the function has a vertical asymptote at the same place. An example is
ƒ(x) = 1/x + sin(1/x)
at x = 0.
This function has a vertical asymptote at x = 0 because
lim(x→0⁺) ƒ(x) = +∞
and
lim(x→0⁻) ƒ(x) = −∞.
The derivative of ƒ is the function
ƒ′(x) = −(1 + cos(1/x))/x².
For the sequence of points
xₙ = (−1)ⁿ/((2n + 1)π), for n = 0, 1, 2, ...
that approaches x = 0 both from the left and from the right, the values ƒ′(xₙ) are constantly 0. Therefore, both one-sided limits of ƒ′ at 0 can be neither +∞ nor −∞. Hence ƒ′ doesn't have a vertical asymptote at x = 0.
Horizontal asymptotes
Horizontal asymptotes are horizontal lines that the graph of the function approaches as x → +∞ or x → −∞. The horizontal line y = c is a horizontal asymptote of the function y = ƒ(x) if
lim(x→−∞) ƒ(x) = c or lim(x→+∞) ƒ(x) = c.
In the first case, ƒ(x) has y = c as asymptote when x tends to −∞, and in the second ƒ(x) has y = c as an asymptote as x tends to +∞.
For example, the arctangent function satisfies
lim(x→−∞) arctan(x) = −π/2
and
lim(x→+∞) arctan(x) = π/2.
So the line y = −π/2 is a horizontal asymptote for the arctangent when x tends to −∞, and y = π/2 is a horizontal asymptote for the arctangent when x tends to +∞.
Functions may lack horizontal asymptotes on either or both sides, or may have one horizontal asymptote that is the same in both directions. For example, the function ƒ(x) = 1/(x² + 1) has a horizontal asymptote at y = 0 when x tends both to −∞ and +∞ because, respectively,
lim(x→−∞) 1/(x² + 1) = 0 and lim(x→+∞) 1/(x² + 1) = 0.
Other common functions that have one or two horizontal asymptotes include y = 1/x (which has a hyperbola as its graph), the Gaussian function, the error function, and the logistic function.
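These horizontal limits are easy to check numerically; a Python sketch for the arctangent case:

```python
import math

# arctan tends to +pi/2 (x -> +inf) and -pi/2 (x -> -inf); the gaps to these
# horizontal asymptotes shrink roughly like 1/x.
for x in (10, 1000, 100000):
    gap_pos = math.pi / 2 - math.atan(x)
    gap_neg = math.atan(-x) + math.pi / 2
    print(x, gap_pos, gap_neg)   # both gaps shrink toward 0
```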
Oblique asymptotes
When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote or slant asymptote. A function ƒ(x) is asymptotic to the straight line y = mx + n (m ≠ 0) if
lim(x→+∞) [ƒ(x) − (mx + n)] = 0 or lim(x→−∞) [ƒ(x) − (mx + n)] = 0.
In the first case the line is an oblique asymptote of ƒ(x) when x tends to +∞, and in the second case the line is an oblique asymptote of ƒ(x) when x tends to −∞.
An example is ƒ(x) = x + 1/x, which has the oblique asymptote y = x (that is m = 1, n = 0) as seen in the limits
lim(x→±∞) [ƒ(x) − x] = lim(x→±∞) 1/x = 0.
Elementary methods for identifying asymptotes
The asymptotes of many elementary functions can be found without the explicit use of limits (although the derivations of such methods typically use limits).
General computation of oblique asymptotes for functions
The oblique asymptote, for the function ƒ(x), will be given by the equation y = mx + n. The value for m is computed first and is given by
m = lim(x→a) ƒ(x)/x,
where a is either +∞ or −∞ depending on the case being studied. It is good practice to treat the two cases separately. If this limit doesn't exist then there is no oblique asymptote in that direction.
Having m, the value for n can then be computed by
n = lim(x→a) [ƒ(x) − mx],
where a should be the same value used before. If this limit fails to exist then there is no oblique asymptote in that direction, even should the limit defining m exist. Otherwise y = mx + n is the oblique asymptote of ƒ(x) as x tends to a.
For example, the function ƒ(x) = (2x² + 3x + 1)/x has
m = lim(x→+∞) ƒ(x)/x = lim(x→+∞) (2x² + 3x + 1)/x² = 2
and then
n = lim(x→+∞) [ƒ(x) − 2x] = lim(x→+∞) (3x + 1)/x = 3,
so that y = 2x + 3 is the asymptote of ƒ(x) when x tends to +∞.
The function ƒ(x) = ln x has
m = lim(x→+∞) (ln x)/x = 0
and then
n = lim(x→+∞) (ln x − 0·x) = lim(x→+∞) ln x, which does not exist.
So y = ln x does not have an asymptote when x tends to +∞.
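The two limits defining m and n can also be estimated numerically. A Python sketch, using ƒ(x) = x + 1/x (shown earlier to have the oblique asymptote y = x) and, as an assumed illustration of the divergent case, the logarithm:

```python
import math

# Numerical estimates of m = lim f(x)/x and n = lim (f(x) - m*x) for
# f(x) = x + 1/x: m tends to 1 and n tends to 0, giving asymptote y = x.
def f(x):
    return x + 1 / x

for x in (1e3, 1e6, 1e9):
    print(x, f(x) / x, f(x) - 1 * x)

# For ln x the slope limit is m = 0, but n = lim (ln x - 0*x) = lim ln x
# diverges, so ln x has no oblique or horizontal asymptote toward +inf.
for x in (1e3, 1e6, 1e9):
    print(x, math.log(x) / x, math.log(x))
```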
Asymptotes for rational functions
A rational function has at most one horizontal asymptote or oblique (slant) asymptote, and possibly many vertical asymptotes.
The degree of the numerator and the degree of the denominator determine whether or not there are any horizontal or oblique asymptotes. The cases are as follows, where deg(numerator) is the degree of the numerator, and deg(denominator) is the degree of the denominator:
deg(numerator) < deg(denominator): the horizontal asymptote is y = 0;
deg(numerator) = deg(denominator): the horizontal asymptote is y = a/b, where a and b are the leading coefficients of the numerator and the denominator;
deg(numerator) = deg(denominator) + 1: there is an oblique asymptote;
deg(numerator) > deg(denominator) + 1: there is no horizontal or oblique asymptote (a curvilinear asymptote exists instead).
The vertical asymptotes occur only when the denominator is zero (if both the numerator and denominator are zero, the multiplicities of the zero are compared). For example, the function ƒ(x) = (x − 2)/(x(x − 1)(x − 2)) has vertical asymptotes at x = 0 and x = 1, but not at x = 2, because the zero of the denominator at x = 2 is cancelled by the zero of the numerator.
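The multiplicity comparison can be seen numerically. A Python sketch with an illustrative function chosen here for the purpose (exact rational arithmetic avoids rounding noise):

```python
from fractions import Fraction

# Illustrative rational function: the denominator vanishes at x = 0, 1, 2,
# but the numerator's zero at x = 2 cancels the denominator's, so only
# x = 0 and x = 1 are vertical asymptotes; x = 2 is a removable hole.
def f(x):
    return (x - 2) / (x * (x - 1) * (x - 2))

print(abs(f(Fraction(1, 1000))))       # large near x = 0
print(abs(f(1 + Fraction(1, 1000))))   # large near x = 1
print(f(2 + Fraction(1, 1000)))        # stays near 1/2: no asymptote at x = 2
```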
Oblique asymptotes of rational functions
When the numerator of a rational function has degree exactly one greater than the denominator, the function has an oblique (slant) asymptote. The asymptote is the polynomial term after dividing the numerator by the denominator. This phenomenon occurs because when dividing the fraction, there will be a linear term, and a remainder. For example, consider the function
ƒ(x) = (x² + x + 1)/(x + 1) = x + 1/(x + 1).
As the value of x increases, f approaches the asymptote y = x. This is because the other term, 1/(x + 1), approaches 0.
If the degree of the numerator is more than 1 larger than the degree of the denominator, and the denominator does not divide the numerator, there will be a nonzero remainder that goes to zero as x increases, but the quotient will not be linear, and the function does not have an oblique asymptote.
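The division step can be made concrete. A Python sketch that takes the example function to be ƒ(x) = (x² + x + 1)/(x + 1), consistent with the 1/(x + 1) remainder term mentioned above:

```python
# Long division of x^2 + x + 1 by x + 1 gives quotient x and remainder 1,
# so f(x) = x + 1/(x + 1) and the oblique asymptote is y = x.
def polydiv(num, den):
    """Polynomial long division; coefficient lists, highest degree first."""
    num = list(num)
    quotient = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]
        quotient.append(coeff)
        for i, d in enumerate(den):
            num[i] -= coeff * d
        num.pop(0)
    return quotient, num   # quotient coefficients, remainder coefficients

q, r = polydiv([1, 1, 1], [1, 1])
print(q, r)   # [1.0, 0.0] [1.0]: quotient x + 0, remainder 1
```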
Transformations of known functions
If a known function has an asymptote (such as y = 0 for f(x) = e^x), then its translations also have an asymptote.
If x = a is a vertical asymptote of f(x), then x = a + h is a vertical asymptote of f(x − h).
If y = c is a horizontal asymptote of f(x), then y = c + k is a horizontal asymptote of f(x) + k.
If a known function has an asymptote, then a scaling of the function also has an asymptote.
If y = ax + b is an asymptote of f(x), then y = cax + cb is an asymptote of cf(x).
For example, f(x) = e^(x−1) + 2 has horizontal asymptote y = 0 + 2 = 2, and no vertical or oblique asymptotes.
General definition
Let A : (a, b) → R² be a parametric plane curve, in coordinates A(t) = (x(t), y(t)). Suppose that the curve tends to infinity, that is:
lim(t→b) (x(t)² + y(t)²) = ∞.
A line ℓ is an asymptote of A if the distance from the point A(t) to ℓ tends to zero as t → b. From the definition, only open curves that have some infinite branch can have an asymptote. No closed curve can have an asymptote.
For example, the upper right branch of the curve y = 1/x can be defined parametrically as x = t, y = 1/t (where t > 0). First, x → ∞ as t → ∞ and the distance from the curve to the x-axis is 1/t which approaches 0 as t → ∞. Therefore, the x-axis is an asymptote of the curve. Also, y → ∞ as t → 0 from the right, and the distance between the curve and the y-axis is t which approaches 0 as t → 0. So the y-axis is also an asymptote. A similar argument shows that the lower left branch of the curve also has the same two lines as asymptotes.
Although the definition here uses a parameterization of the curve, the notion of asymptote does not depend on the parameterization. In fact, if the equation of the line is ax + by + c = 0, then the distance from the point A(t) = (x(t), y(t)) to the line is given by
|a·x(t) + b·y(t) + c| / √(a² + b²);
if γ(t) is a change of parameterization then the distance becomes
|a·x(γ(t)) + b·y(γ(t)) + c| / √(a² + b²),
which tends to zero simultaneously with the previous expression.
An important case is when the curve is the graph of a real function (a function of one real variable and returning real values). The graph of the function y = ƒ(x) is the set of points of the plane with coordinates (x, ƒ(x)). For this, a parameterization is
t ↦ (t, ƒ(t)).
This parameterization is to be considered over the open intervals (a, b), where a can be −∞ and b can be +∞.
An asymptote can be either vertical or non-vertical (oblique or horizontal). In the first case its equation is x = c, for some real number c. The non-vertical case has equation y = mx + n, where m and n are real numbers. All three types of asymptotes can be present at the same time in specific examples. Unlike asymptotes for curves that are graphs of functions, a general curve may have more than two non-vertical asymptotes, and may cross its vertical asymptotes more than once.
Curvilinear asymptotes
Let be a parametric plane curve, in coordinates A(t) = (x(t),y(t)), and B be another (unparameterized) curve. Suppose, as before, that the curve A tends to infinity. The curve B is a curvilinear asymptote of A if the shortest distance from the point A(t) to a point on B tends to zero as t → b. Sometimes B is simply referred to as an asymptote of A, when there is no risk of confusion with linear asymptotes.
For example, the function
ƒ(x) = (x³ + 2x² + 3x + 4)/x = x² + 2x + 3 + 4/x
has a curvilinear asymptote y = x² + 2x + 3, which is known as a parabolic asymptote because it is a parabola rather than a straight line.
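A numeric sketch, taking ƒ(x) = (x³ + 2x² + 3x + 4)/x as an assumed illustration (it equals x² + 2x + 3 + 4/x, so the parabola y = x² + 2x + 3 is a parabolic asymptote):

```python
# The gap between f and the parabola is exactly 4/x, which vanishes as x grows.
def f(x):
    return (x**3 + 2 * x**2 + 3 * x + 4) / x

def parabola(x):
    return x**2 + 2 * x + 3

for x in (10, 100, 1000):
    print(x, f(x) - parabola(x))   # roughly 0.4, 0.04, 0.004 -> 0
```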
Asymptotes and curve sketching
Asymptotes are used in procedures of curve sketching. An asymptote serves as a guide line to show the behavior of the curve towards infinity. In order to get better approximations of the curve, curvilinear asymptotes have also been used although the term asymptotic curve seems to be preferred.
Algebraic curves
The asymptotes of an algebraic curve in the affine plane are the lines that are tangent to the projectivized curve through a point at infinity. For example, one may identify the asymptotes to the unit hyperbola in this manner. Asymptotes are often considered only for real curves, although they also make sense when defined in this way for curves over an arbitrary field.
A plane curve of degree n intersects its asymptote at most at n−2 other points, by Bézout's theorem, as the intersection at infinity is of multiplicity at least two. For a conic, there are a pair of lines that do not intersect the conic at any complex point: these are the two asymptotes of the conic.
A plane algebraic curve is defined by an equation of the form P(x, y) = 0, where P is a polynomial of degree n,
P(x, y) = Pn(x, y) + Pn−1(x, y) + ⋯ + P1(x, y) + P0,
where Pk is homogeneous of degree k. Vanishing of the linear factors of the highest degree term Pn defines the asymptotes of the curve: setting Q = Pn, if Pn(b, a) = 0, then the line
Q′x(b, a) x + Q′y(b, a) y + Pn−1(b, a) = 0
is an asymptote if Q′x(b, a) and Q′y(b, a) are not both zero. If Q′x(b, a) = Q′y(b, a) = 0 and Pn−1(b, a) ≠ 0, there is no asymptote, but the curve has a branch that looks like a branch of a parabola. Such a branch is called a parabolic branch, even when it does not have any parabola that is a curvilinear asymptote. If Q′x(b, a) = Q′y(b, a) = Pn−1(b, a) = 0, the curve has a singular point at infinity, which may have several asymptotes or parabolic branches.
Over the complex numbers, Pn splits into linear factors, each of which defines an asymptote (or several for multiple factors). Over the reals, Pn splits into factors that are linear or quadratic. Only the linear factors correspond to infinite (real) branches of the curve, but if a linear factor has multiplicity greater than one, the curve may have several asymptotes or parabolic branches. It may also occur that such a multiple linear factor corresponds to two complex conjugate branches, and does not correspond to any infinite branch of the real curve. For example, the curve x⁴ + y² − 1 = 0 has no real points outside the square |x| ≤ 1, |y| ≤ 1, but its highest order term gives the linear factor x with multiplicity 4, leading to the unique asymptote x = 0.
Asymptotic cone
The hyperbola
x²/a² − y²/b² = 1
has the two asymptotes
y = ±(b/a)x.
The equation for the union of these two lines is
x²/a² − y²/b² = 0.
Similarly, the hyperboloid
x²/a² + y²/b² − z²/c² = 1
is said to have the asymptotic cone
x²/a² + y²/b² − z²/c² = 0.
The distance between the hyperboloid and cone approaches 0 as the distance from the origin approaches infinity.
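The same shrinking-distance behavior is easy to see in a two-dimensional cross-section. A Python sketch for the unit hyperbola against its asymptote y = x:

```python
import math

# Vertical gap between the branch y = sqrt(x^2 - 1) of the unit hyperbola
# and its asymptote y = x: equal to 1/(x + sqrt(x^2 - 1)), roughly 1/(2x),
# so it tends to 0 as x grows.
for x in (10, 100, 1000):
    print(x, x - math.sqrt(x * x - 1))
```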
More generally, consider a surface that has an implicit equation
Pd(x, y, z) + Pd−2(x, y, z) + ⋯ + P1(x, y, z) + P0 = 0,
where the Pi are homogeneous polynomials of degree i and Pd−1 = 0. Then the equation Pd(x, y, z) = 0 defines a cone which is centered at the origin. It is called an asymptotic cone, because the distance to the cone of a point of the surface tends to zero when the point on the surface tends to infinity.
See also
Big O notation
References
General references
Specific references
External links
Hyperboloid and Asymptotic Cone, string surface model, 1872 from the Science Museum
Mathematical analysis
Analytic geometry
|
https://en.wikipedia.org/wiki/Afterglow
|
An afterglow in meteorology consists of several atmospheric optical phenomena, with a general definition as a broad arch of whitish or pinkish sunlight in the twilight sky, consisting of the bright segment and the purple light. The purple light mainly occurs when the Sun is 2–6° below the horizon, from civil to nautical twilight, while the bright segment lasts until the end of nautical twilight. Afterglow is often discussed in connection with volcanic eruptions, in which case its purple light is treated as a distinct volcanic purple light; in volcanic occurrences it is specifically light scattered by fine particulates, like dust, suspended in the atmosphere. In the case of alpenglow, which is similar to the Belt of Venus, afterglow refers in general to the golden-red glowing light from the sunset and sunrise reflected in the sky, and in particular to its last stage, when the purple light is reflected. The opposite of an afterglow is a foreglow, which occurs before sunrise.
Around civil twilight, during the golden hour, the sunlight reaching Earth is dominated by its low-energy, low-frequency red component.
During this part of civil twilight, after sunset or before sunrise, the red sunlight remains visible by scattering through particles in the air. Backscattering, possibly after being reflected off clouds or high snowfields in mountain regions, furthermore creates a reddish to pinkish light. The high-energy, high-frequency components of light towards blue are scattered out broadly, producing the broader blue light of nautical twilight before or after the reddish light of civil twilight, and, in combination with the reddish light, the purple light. This period when blue dominates is referred to as the blue hour and is, like the golden hour, widely treasured by photographers and painters.
After the 1883 eruption of the volcano Krakatoa, a remarkable series of red sunsets appeared worldwide. An enormous amount of exceedingly fine dust was blown to a great height by the volcano's explosion, and then globally diffused by the high atmospheric winds. Edvard Munch's painting The Scream possibly depicts an afterglow during this period.
See also
Airglow
Belt of Venus
Earth's shadow
Gegenschein
Red sky at morning
Sunset
References
External links
Atmospheric optical phenomena
es:Arrebol
fi:Purppuravalo
|
https://en.wikipedia.org/wiki/Agar
|
Agar ( or ), or agar-agar, is a jelly-like substance consisting of polysaccharides obtained from the cell walls of some species of red algae, primarily from "ogonori" (Gracilaria) and "tengusa" (Gelidiaceae). As found in nature, agar is a mixture of two components, the linear polysaccharide agarose and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae and is released on boiling. These algae are known as agarophytes, belonging to the Rhodophyta (red algae) phylum. The processing of food-grade agar removes the agaropectin, and the commercial product is essentially pure agarose.
Agar has been used as an ingredient in desserts throughout Asia and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative; an appetite suppressant; a vegan substitute for gelatin; a thickener for soups; in fruit preserves, ice cream, and other desserts; as a clarifying agent in brewing; and for sizing paper and fabrics.
Etymology
The word agar comes from agar-agar, the Malay name for red algae (Gigartina, Eucheuma, Gracilaria) from which the jelly is produced. It is also known as Kanten (from the phrase kan-zarashi tokoroten, or “cold-exposed agar”), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. Gracilaria edulis or its synonym G. lichenoides is specifically referred to as agal-agal or Ceylon agar.
History
Macroalgae have been used widely as food by coastal cultures, especially in Southeast Asia. In the Philippines, Gracilaria, known as gulaman (or gulaman dagat) in Tagalog, have been harvested and used as food for centuries, eaten both fresh or sun-dried and turned into jellies. The earliest historical attestation is from the Vocabulario de la lengua tagala (1754) by the Jesuit priests Juan de Noceda and Pedro de Sanlucar, where golaman or gulaman was defined as "una yerva, de que se haze conserva a modo de Halea, naze en la mar" ("an herb, from which a jam-like preserve is made, grows in the sea"), with an additional entry for guinolaman to refer to food made with the jelly.
Carrageenan, derived from gusô (Eucheuma spp.), which also congeals into a gel-like texture, is used similarly among the Visayan peoples and has been recorded in the even earlier Diccionario De La Lengua Bisaya, Hiligueina y Haraia de la isla de Panay y Sugbu y para las demas islas (c. 1637) of the Augustinian missionary Alonso de Méntrida. In the book, Méntrida describes gusô as being cooked until it melts, and then allowed to congeal into a sour dish.
Jelly seaweeds were also favoured and foraged by Malay communities living on the coasts of the Riau Archipelago and Singapore in Southeast Asia for centuries.
The application of agar as a food additive in Japan is alleged to have been discovered in 1658 by Mino Tarōzaemon, an innkeeper in current Fushimi-ku, Kyoto, who, according to legend, was said to have discarded surplus seaweed soup (Tokoroten) and noticed that it gelled later after a winter night's freezing.
Agar was first subjected to chemical analysis in 1859 by the French chemist Anselme Payen, who had obtained agar from the marine algae Gelidium corneum.
Beginning in the late 19th century, agar began to be used as a solid medium for growing various microbes. Agar was first described for use in microbiology in 1882 by the German microbiologist Walther Hesse, an assistant working in Robert Koch's laboratory, on the suggestion of his wife Fanny Hesse. Agar quickly supplanted gelatin as the base of microbiological media, due to its higher melting temperature, allowing microbes to be grown at higher temperatures without the media liquefying.
With its newfound use in microbiology, agar production quickly increased. This production centered on Japan, which produced most of the world's agar until World War II. However, with the outbreak of World War II, many nations were forced to establish domestic agar industries in order to continue microbiological research. Around the time of World War II, approximately 2,500 tons of agar were produced annually. By the mid-1970s, production worldwide had increased dramatically to approximately 10,000 tons each year. Since then, production of agar has fluctuated due to unstable and sometimes over-utilized seaweed populations.
Chemical composition
Agar consists of a mixture of two polysaccharides: agarose and agaropectin, with agarose making up about 70% of the mixture and agaropectin the remaining 30%. Agarose is a linear polymer, made up of repeating units of agarobiose, a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agaropectin is a heterogeneous mixture of smaller molecules that occur in lesser amounts, and is made up of alternating units of D-galactose and L-galactose heavily modified with acidic side-groups, such as sulfate, glucuronate, and pyruvate.
Physical properties
Agar exhibits hysteresis: when mixed with water, it solidifies and forms a gel on cooling to about 32–40 °C (the gel point), yet melts only on heating to about 85 °C (the melting point). This difference between the gel point and the melting point temperatures lends a suitable balance between easy melting and good gel stability at relatively high temperatures. Since many scientific applications require incubation at temperatures close to human body temperature (37 °C), agar is more appropriate than other solidifying agents that melt at this temperature, such as gelatin.
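The hysteresis described above means that, between the gel point and the melting point, agar's phase depends on its thermal history rather than on temperature alone. A minimal sketch of this behavior, using illustrative temperatures (38 °C gel point, 85 °C melting point, both assumptions for the example rather than measured values):

```python
# Toy model of agar's thermal hysteresis: between the gel point and the
# melting point, the phase depends on history, not temperature alone.
# The two thresholds below are illustrative assumptions.
GEL_POINT_C = 38.0    # solidifies when cooled to/below this
MELT_POINT_C = 85.0   # liquefies when heated to/above this

def next_phase(temp_c, current_phase):
    """Return 'gel' or 'liquid' given the temperature and the prior phase."""
    if temp_c >= MELT_POINT_C:
        return "liquid"
    if temp_c <= GEL_POINT_C:
        return "gel"
    return current_phase  # hysteresis window: the previous state persists

# Note that at 60 degrees C the mixture is liquid on the way down
# but still a gel on the way back up.
phase = "liquid"
history = []
for t in [95, 60, 37, 30, 37, 60]:
    phase = next_phase(t, phase)
    history.append((t, phase))
print(history)
```

The two visits to 60 °C yield different phases, which is exactly why a plate poured hot stays solid through a 37 °C incubation.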
Uses
Culinary
Agar-agar is a natural vegetable counterpart to gelatin. It is white and semi-translucent when sold in packages as washed and dried strips or in powdered form. It can be used to make jellies, puddings, and custards. When making jelly, it is boiled in water until the solids dissolve. Sweetener, flavoring, coloring, fruits, and/or vegetables are then added, and the liquid is poured into molds to be served as desserts and vegetable aspics, or incorporated with other desserts, such as a layer of jelly in a cake.
Agar-agar is approximately 80% dietary fiber, so it can serve as an intestinal regulator. Its bulking quality has been behind fad diets in Asia, for example the kanten (the Japanese word for agar-agar) diet. Once ingested, kanten absorbs water and triples in size, leaving consumers feeling fuller.
Asian culinary
One use of agar in Japanese cuisine (Wagashi) is anmitsu, a dessert made of small cubes of agar jelly and served in a bowl with various fruits or other ingredients. It is also the main ingredient in mizu yōkan, another popular Japanese food. In Philippine cuisine, it is used to make the jelly bars in the various gulaman refreshments like Sago't Gulaman, Samalamig, or desserts such as buko pandan, agar flan, halo-halo, fruit cocktail jelly, and the black and red gulaman used in various fruit salads. In Vietnamese cuisine, jellies made of flavored layers of agar agar, called thạch, are a popular dessert, and are often made in ornate molds for special occasions. In Indian cuisine, agar is used for making desserts. In Burmese cuisine, a sweet jelly known as kyauk kyaw is made from agar. Agar jelly is widely used in Taiwanese bubble tea.
Other culinary
It can be used as addition to or as a replacement for pectin in jams and marmalades, as a substitute to gelatin for its superior gelling properties, and as a strengthening ingredient in souffles and custards. Another use of agar-agar is in a Russian dish ptich'ye moloko (bird's milk), a rich jellified custard (or soft meringue) used as a cake filling or chocolate-glazed as individual sweets.
Agar-agar may also be used as the gelling agent in gel clarification, a culinary technique used to clarify stocks, sauces, and other liquids. Mexico has traditional candies made out of agar gelatin, most of them in colorful, half-circle shapes that resemble a melon or watermelon fruit slice, and commonly covered with sugar. They are known in Spanish as dulce de agar (agar sweets).
Agar-agar is an allowed nonorganic/nonsynthetic additive used as a thickener, gelling agent, texturizer, moisturizer, emulsifier, flavor enhancer, and absorbent in certified organic foods.
Microbiology
Agar plate
An agar plate or Petri dish is used to provide a growth medium using a mix of agar and other nutrients in which microorganisms, including bacteria and fungi, can be cultured and observed under the microscope. Agar is indigestible for many organisms, so microbial growth does not affect the gel, and it remains stable. Agar is typically sold commercially as a powder that can be mixed with water and prepared similarly to gelatin before use as a growth medium. Nutrients are typically added to meet the nutritional needs of the microorganism; the formulations may be "undefined", where the precise composition is unknown, or "defined", where the exact chemical composition is known. Agar is often dispensed using a sterile media dispenser.
Different algae produce various types of agar, each with unique properties that suit different purposes. The agarose component is what makes agar solidify: agarose gels melt when heated and re-solidify on cooling, so they are referred to as "physical gels". In contrast, polyacrylamide polymerization is an irreversible process, and the resulting products are known as chemical gels.
There are a variety of different types of agar that support the growth of different microorganisms. A nutrient agar may be permissive, allowing for the cultivation of any non-fastidious microorganisms; a commonly used nutrient agar for bacteria is Luria Bertani (LB) agar, which contains lysogeny broth, a nutrient-rich medium used for bacterial growth. Other, fastidious organisms may require the addition of different biological fluids, such as horse or sheep blood, serum, egg yolk, and so on. Agar plates can also be selective, and can be used to promote the growth of bacteria of interest while inhibiting others. A variety of chemicals may be added to create an environment favourable for specific types of bacteria, or bacteria with certain properties, but not conducive to the growth of others. For example, antibiotics may be added in cloning experiments, whereby bacteria carrying an antibiotic-resistance plasmid are selected.
Motility assays
As a gel, an agar or agarose medium is porous and therefore can be used to measure microorganism motility and mobility. The gel's porosity is directly related to the concentration of agarose in the medium, so various levels of effective viscosity (from the cell's "point of view") can be selected, depending on the experimental objectives.
A common identification assay involves culturing a sample of the organism deep within a block of nutrient agar. Cells will attempt to grow within the gel structure. Motile species will be able to migrate, albeit slowly, throughout the gel, and infiltration rates can then be visualized, whereas non-motile species will show growth only along the path introduced by the initial sample deposition.
Another setup commonly used for measuring chemotaxis and chemokinesis utilizes the under-agarose cell migration assay, whereby a layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient.
Plant biology
Research grade agar is used extensively in plant biology, as it is optionally supplemented with a nutrient and/or vitamin mixture that allows for seedling germination in Petri dishes under sterile conditions (given that the seeds are sterilized as well). Nutrient and/or vitamin supplementation for Arabidopsis thaliana is standard across most experimental conditions; Murashige & Skoog (MS) nutrient mix and Gamborg's B5 vitamin mix are generally used. A 1.0% agar/0.44% MS+vitamin solution in distilled water is suitable for growth media at normal growth temperatures.
When using agar in any growth medium, it is important to know that solidification of the agar is pH-dependent. The optimal range for solidification is between 5.4 and 5.7. Usually, the addition of potassium hydroxide is needed to increase the pH to this range. A general guideline is about 600 µl of 0.1 M KOH per 250 ml of GM. This entire mixture can be sterilized using the liquid cycle of an autoclave.
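The quantities above (1.0% w/v agar, 0.44% w/v MS+vitamins, roughly 600 µl of 0.1 M KOH per 250 ml of medium) scale linearly with batch volume. A small sketch of that scaling; the helper name and return structure are illustrative, not a standard protocol API:

```python
# Scale the growth-medium guideline (1.0% w/v agar, 0.44% w/v MS+vitamins,
# ~600 ul of 0.1 M KOH per 250 ml of medium) to an arbitrary batch volume.
def gm_recipe(volume_ml):
    """Return grams of agar and MS mix, and ul of 0.1 M KOH, for volume_ml of GM."""
    return {
        "agar_g": round(0.010 * volume_ml, 2),        # 1.0 g per 100 ml (1.0% w/v)
        "ms_vitamin_g": round(0.0044 * volume_ml, 2),  # 0.44 g per 100 ml (0.44% w/v)
        "koh_0p1M_ul": round(600 * volume_ml / 250),   # scale the 600 ul / 250 ml guide
    }

print(gm_recipe(1000))  # quantities for a 1 l batch
```

The KOH figure is only a starting point; in practice the pH should still be checked against the 5.4–5.7 target before autoclaving.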
This medium lends itself well to applying specific concentrations of phytohormones and the like to induce particular growth patterns: one can easily prepare a solution containing the desired amount of hormone, add it to a known volume of GM, and autoclave it, which both sterilizes the medium and evaporates off any solvent that may have been used to dissolve the often-polar hormones. This hormone/GM solution can then be spread across the surface of Petri dishes sown with germinated and/or etiolated seedlings.
Experiments with the moss Physcomitrella patens, however, have shown that choice of the gelling agent – agar or Gelrite – does influence phytohormone sensitivity of the plant cell culture.
Other uses
Agar is used:
As an impression material in dentistry.
As a medium to precisely orient the tissue specimen and secure it by agar pre-embedding (especially useful for small endoscopy biopsy specimens) for histopathology processing.
To make salt bridges and gel plugs for use in electrochemistry.
In formicariums as a transparent substitute for sand and a source of nutrition.
As a natural ingredient in forming modeling clay for young children to play with.
As an allowed biofertilizer component in organic farming.
As a substrate for precipitin reactions in immunology.
At different times, as a substitute for gelatin in photographic emulsions, for arrowroot in preparing silver paper, and for fish glue in resist etching.
As an MRI elastic gel phantom to mimic tissue mechanical properties in Magnetic Resonance Elastography.
Gelidium agar is used primarily for bacteriological plates. Gracilaria agar is used mainly in food applications.
In 2016, AMAM, a Japanese company, developed a prototype for an agar-based commercial packaging system called Agar Plasticity, intended as a replacement for oil-based plastic packaging.
https://en.wikipedia.org/wiki/Antioxidant
Antioxidants are compounds that inhibit oxidation (usually occurring as autoxidation), a chemical reaction that can produce free radicals. Autoxidation leads to degradation of organic compounds, including living matter. Antioxidants are frequently added to industrial products, such as polymers, fuels, and lubricants, to extend their usable lifetimes. Foods are also treated with antioxidants to forestall spoilage, in particular the rancidification of oils and fats. In cells, antioxidants such as glutathione, mycothiol or bacillithiol, and enzyme systems like superoxide dismutase, can prevent damage from oxidative stress.
Known dietary antioxidants are vitamins A, C, and E, but the term antioxidant has also been applied to numerous other dietary compounds that only have antioxidant properties in vitro, with little evidence for antioxidant properties in vivo. Dietary supplements marketed as antioxidants have not been shown to maintain health or prevent disease in humans.
History
As part of their adaptation from marine life, terrestrial plants began producing non-marine antioxidants such as ascorbic acid (vitamin C), polyphenols and tocopherols. The evolution of angiosperm plants between 50 and 200 million years ago resulted in the development of many antioxidant pigments – particularly during the Jurassic period – as chemical defences against reactive oxygen species that are byproducts of photosynthesis. Originally, the term antioxidant specifically referred to a chemical that prevented the consumption of oxygen. In the late 19th and early 20th centuries, extensive study concentrated on the use of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines.
Early research on the role of antioxidants in biology focused on their use in preventing the oxidation of unsaturated fats, which is the cause of rancidity. Antioxidant activity could be measured simply by placing the fat in a closed container with oxygen and measuring the rate of oxygen consumption. However, it was the identification of vitamins C and E as antioxidants that revolutionized the field and led to the realization of the importance of antioxidants in the biochemistry of living organisms. The possible mechanisms of action of antioxidants were first explored when it was recognized that a substance with anti-oxidative activity is likely to be one that is itself readily oxidized. Research into how vitamin E prevents the process of lipid peroxidation led to the identification of antioxidants as reducing agents that prevent oxidative reactions, often by scavenging reactive oxygen species before they can damage cells.
Uses in technology
Food preservatives
Antioxidants are used as food additives to help guard against food deterioration. Exposure to oxygen and sunlight are the two main factors in the oxidation of food, so food is preserved by keeping it in the dark and sealing it in containers, or even by coating it in wax, as with cucumbers. However, as oxygen is also important for plant respiration, storing plant materials in anaerobic conditions produces unpleasant flavors and unappealing colors. Consequently, packaging of fresh fruits and vegetables contains an ≈8% oxygen atmosphere. Antioxidants are an especially important class of preservatives as, unlike bacterial or fungal spoilage, oxidation reactions still occur relatively rapidly in frozen or refrigerated food. These preservatives include natural antioxidants such as ascorbic acid (AA, E300) and tocopherols (E306), as well as synthetic antioxidants such as propyl gallate (PG, E310), tertiary butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA, E320) and butylated hydroxytoluene (BHT, E321).
Unsaturated fats can be highly susceptible to oxidation, causing rancidification. Oxidized lipids are often discolored and can impart unpleasant tastes and flavors. Thus, these foods are rarely preserved by drying; instead, they are preserved by smoking, salting, or fermenting. Even less fatty foods such as fruits are sprayed with sulfurous antioxidants prior to air drying. Metals catalyse oxidation. Some fatty foods such as olive oil are partially protected from oxidation by their natural content of antioxidants. Fatty foods are sensitive to photooxidation, which forms hydroperoxides by oxidizing unsaturated fatty acids and esters. Exposure to ultraviolet (UV) radiation can cause direct photooxidation and decompose peroxides and carbonyl molecules. These molecules undergo free radical chain reactions, which antioxidants inhibit by interrupting the oxidation process.
Cosmetics preservatives
Antioxidant stabilizers are also added to fat-based cosmetics such as lipstick and moisturizers to prevent rancidity. Antioxidants in cosmetic products prevent oxidation of active ingredients and lipid content. For example, phenolic antioxidants such as stilbenes, flavonoids, and hydroxycinnamic acid strongly absorb UV radiation due to the presence of chromophores. They reduce oxidative stress from sun exposure by absorbing UV light.
Industrial uses
Antioxidants may be added to industrial products, such as stabilizers in fuels and additives in lubricants, to prevent oxidation and polymerization that leads to the formation of engine-fouling residues.
Antioxidant polymer stabilizers are widely used to prevent the degradation of polymers such as rubbers, plastics and adhesives that causes a loss of strength and flexibility in these materials. Polymers containing double bonds in their main chains, such as natural rubber and polybutadiene, are especially susceptible to oxidation and ozonolysis. They can be protected by antiozonants. Oxidation can be accelerated by UV radiation in natural sunlight to cause photo-oxidation. Various specialised light stabilisers, such as HALS may be added to plastics to prevent this. Synthetic phenolic and aminic antioxidants are increasingly being identified as potential human and environmental health hazards.
Environmental and health hazards
Synthetic phenolic antioxidants (SPAs) and aminic antioxidants have potential human and environmental health hazards. SPAs are common in indoor dust, small air particles, sediment, sewage, river water and wastewater. They are synthesized from phenolic compounds and include 2,6-di-tert-butyl-4-methylphenol (BHT), 2,6-di-tert-butyl-p-benzoquinone (BHT-Q), 2,4-di-tert-butyl-phenol (DBP) and 3-tert-butyl-4-hydroxyanisole (BHA). BHT can cause hepatotoxicity and damage to the endocrine system, and may promote tumor development in combination with dimethylhydrazine. BHT-Q can cause DNA damage and mismatches through the cleavage process, generating superoxide radicals. DBP is toxic to marine life under long-term exposure. Phenolic antioxidants have low biodegradability, but they do not have severe toxicity toward aquatic organisms at low concentrations. Another type of antioxidant, diphenylamine (DPA), is commonly used in the production of commercial, industrial lubricants and rubber products, and it also acts as a supplement for automotive engine oils.
Oxidative challenge in biology
The vast majority of complex life on Earth requires oxygen for its metabolism, but this same oxygen is a highly reactive element that can damage living organisms. Organisms contain chemicals and enzymes that minimize this oxidative damage without interfering with the beneficial effect of oxygen. In general, antioxidant systems either prevent these reactive species from being formed, or remove them, thus minimizing their damage. Reactive oxygen species can have useful cellular functions, such as redox signaling. Thus, ideally, antioxidant systems do not remove oxidants entirely, but maintain them at some optimum concentration.
Reactive oxygen species produced in cells include hydrogen peroxide (H2O2), hypochlorous acid (HClO), and free radicals such as the hydroxyl radical (·OH) and the superoxide anion (O2−). The hydroxyl radical is particularly unstable and will react rapidly and non-specifically with most biological molecules. This species is produced from hydrogen peroxide in metal-catalyzed redox reactions such as the Fenton reaction. These oxidants can damage cells by starting chemical chain reactions such as lipid peroxidation, or by oxidizing DNA or proteins. Damage to DNA can cause mutations and possibly cancer, if not reversed by DNA repair mechanisms, while damage to proteins causes enzyme inhibition, denaturation and protein degradation.
The use of oxygen as part of the process for generating metabolic energy produces reactive oxygen species. In this process, the superoxide anion is produced as a by-product of several steps in the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, since a highly reactive free radical is formed as an intermediate (Q·−). This unstable intermediate can lead to electron "leakage", when electrons jump directly to oxygen and form the superoxide anion, instead of moving through the normal series of well-controlled reactions of the electron transport chain. Peroxide is also produced from the oxidation of reduced flavoproteins, such as complex I. However, although these enzymes can produce oxidants, the relative importance of the electron transfer chain to other processes that generate peroxide is unclear. In plants, algae, and cyanobacteria, reactive oxygen species are also produced during photosynthesis, particularly under conditions of high light intensity. This effect is partly offset by the involvement of carotenoids in photoinhibition, and in algae and cyanobacteria, by large amounts of iodide and selenium, which involves these antioxidants reacting with over-reduced forms of the photosynthetic reaction centres to prevent the production of reactive oxygen species.
Examples of bioactive antioxidant compounds
Physiological antioxidants are classified into two broad divisions, depending on whether they are soluble in water (hydrophilic) or in lipids (lipophilic). In general, water-soluble antioxidants react with oxidants in the cell cytosol and the blood plasma, while lipid-soluble antioxidants protect cell membranes from lipid peroxidation. These compounds may be synthesized in the body or obtained from the diet. The different antioxidants are present at a wide range of concentrations in body fluids and tissues, with some such as glutathione or ubiquinone mostly present within cells, while others such as uric acid are more systemically distributed (see table below). Some antioxidants are found only in a few organisms, and these compounds can be important in pathogens as virulence factors.
The interactions between these different antioxidants may be synergistic and interdependent. The action of one antioxidant may therefore depend on the proper function of other members of the antioxidant system. The amount of protection provided by any one antioxidant will also depend on its concentration, its reactivity towards the particular reactive oxygen species being considered, and the status of the antioxidants with which it interacts.
Some compounds contribute to antioxidant defense by chelating transition metals and preventing them from catalyzing the production of free radicals in the cell. The ability to sequester iron in iron-binding proteins, such as transferrin and ferritin, is one such function. Selenium and zinc are commonly referred to as antioxidant minerals, but these chemical elements have no antioxidant action themselves; rather, they are required for the activity of antioxidant enzymes, such as glutathione reductase and superoxide dismutase. (See also selenium in biology and zinc in biology.)
Uric acid
Uric acid (UA) is an antioxidant oxypurine produced from xanthine by the enzyme xanthine oxidase, and is an intermediate product of purine metabolism. In almost all land animals, urate oxidase further catalyzes the oxidation of uric acid to allantoin, but in humans and most higher primates, the urate oxidase gene is nonfunctional, so that UA is not further broken down. The evolutionary reasons for this loss of urate conversion to allantoin remain the topic of active speculation. The antioxidant effects of uric acid have led researchers to suggest this mutation was beneficial to early primates and humans. Studies of high altitude acclimatization support the hypothesis that urate acts as an antioxidant by mitigating the oxidative stress caused by high-altitude hypoxia.
Uric acid has the highest concentration of any blood antioxidant and provides over half of the total antioxidant capacity of human serum. Uric acid's antioxidant activities are also complex, given that it does not react with some oxidants, such as superoxide, but does act against peroxynitrite, peroxides, and hypochlorous acid. Concerns over elevated UA's contribution to gout must be weighed against its being only one of many risk factors. By itself, the UA-related risk of gout at high levels (415–530 μmol/L) is only 0.5% per year, increasing to 4.5% per year at UA supersaturation levels (535+ μmol/L). Many of the aforementioned studies determined UA's antioxidant actions within normal physiological levels, and some found antioxidant activity at levels as high as 285 μmol/L.
Vitamin C
Ascorbic acid or vitamin C is a monosaccharide oxidation-reduction (redox) catalyst found in both animals and plants. As one of the enzymes needed to make ascorbic acid has been lost by mutation during primate evolution, humans must obtain it from their diet; it is therefore a dietary vitamin. Most other animals are able to produce this compound in their bodies and do not require it in their diets. Ascorbic acid is required for the conversion of procollagen to collagen by oxidizing proline residues to hydroxyproline. In cells, it is maintained in its reduced form by reaction with glutathione, which can be catalysed by protein disulfide isomerase and glutaredoxins. Ascorbic acid is a redox catalyst which can reduce, and thereby neutralize, reactive oxygen species such as hydrogen peroxide. In addition to its direct antioxidant effects, ascorbic acid is also a substrate for the redox enzyme ascorbate peroxidase, a function that is used in stress resistance in plants. Ascorbic acid is present at high levels in all parts of plants and can reach concentrations of 20 millimolar in chloroplasts.
Glutathione
Glutathione is a cysteine-containing peptide found in most forms of aerobic life. It is not required in the diet and is instead synthesized in cells from its constituent amino acids. Glutathione has antioxidant properties since the thiol group in its cysteine moiety is a reducing agent and can be reversibly oxidized and reduced. In cells, glutathione is maintained in the reduced form by the enzyme glutathione reductase and in turn reduces other metabolites and enzyme systems, such as ascorbate in the glutathione-ascorbate cycle, glutathione peroxidases and glutaredoxins, as well as reacting directly with oxidants. Due to its high concentration and its central role in maintaining the cell's redox state, glutathione is one of the most important cellular antioxidants. In some organisms glutathione is replaced by other thiols, such as by mycothiol in the Actinomycetes, bacillithiol in some gram-positive bacteria, or by trypanothione in the Kinetoplastids.
Vitamin E
Vitamin E is the collective name for a set of eight related tocopherols and tocotrienols, which are fat-soluble vitamins with antioxidant properties. Of these, α-tocopherol has been most studied as it has the highest bioavailability, with the body preferentially absorbing and metabolising this form.
It has been claimed that the α-tocopherol form is the most important lipid-soluble antioxidant, and that it protects membranes from oxidation by reacting with lipid radicals produced in the lipid peroxidation chain reaction. This removes the free radical intermediates and prevents the propagation reaction from continuing. This reaction produces oxidised α-tocopheroxyl radicals that can be recycled back to the active reduced form through reduction by other antioxidants, such as ascorbate, retinol or ubiquinol. This is in line with findings showing that α-tocopherol, but not water-soluble antioxidants, efficiently protects glutathione peroxidase 4 (GPX4)-deficient cells from cell death. GPx4 is the only known enzyme that efficiently reduces lipid-hydroperoxides within biological membranes.
However, the roles and importance of the various forms of vitamin E are presently unclear, and it has even been suggested that the most important function of α-tocopherol is as a signaling molecule, with this molecule having no significant role in antioxidant metabolism. The functions of the other forms of vitamin E are even less well understood, although γ-tocopherol is a nucleophile that may react with electrophilic mutagens, and tocotrienols may be important in protecting neurons from damage.
Pro-oxidant activities
Antioxidants that are reducing agents can also act as pro-oxidants. For example, vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide; however, it will also reduce metal ions such as iron and copper that generate free radicals through the Fenton reaction. While ascorbic acid is an effective antioxidant, it can also oxidatively change the flavor and color of food. In the presence of transition metals, even low concentrations of ascorbic acid can promote radical generation via the Fenton reaction:
2 Fe3+ + Ascorbate → 2 Fe2+ + Dehydroascorbate
2 Fe2+ + 2 H2O2 → 2 Fe3+ + 2 OH· + 2 OH−
The relative importance of the antioxidant and pro-oxidant activities of antioxidants is an area of current research, but vitamin C, which exerts its effects as a vitamin by oxidizing polypeptides, appears to have a mostly antioxidant action in the human body.
Enzyme systems
As with the chemical antioxidants, cells are protected against oxidative stress by an interacting network of antioxidant enzymes. Here, the superoxide released by processes such as oxidative phosphorylation is first converted to hydrogen peroxide and then further reduced to give water. This detoxification pathway is the result of multiple enzymes, with superoxide dismutases catalysing the first step and then catalases and various peroxidases removing hydrogen peroxide. As with antioxidant metabolites, the contributions of these enzymes to antioxidant defenses can be hard to separate from one another, but the generation of transgenic mice lacking just one antioxidant enzyme can be informative.
Superoxide dismutase, catalase, and peroxiredoxins
Superoxide dismutases (SODs) are a class of closely related enzymes that catalyze the breakdown of the superoxide anion into oxygen and hydrogen peroxide. SOD enzymes are present in almost all aerobic cells and in extracellular fluids. Superoxide dismutase enzymes contain metal ion cofactors that, depending on the isozyme, can be copper, zinc, manganese or iron. In humans, the copper/zinc SOD is present in the cytosol, while manganese SOD is present in the mitochondrion. There also exists a third form of SOD in extracellular fluids, which contains copper and zinc in its active sites. The mitochondrial isozyme seems to be the most biologically important of these three, since mice lacking this enzyme die soon after birth. In contrast, the mice lacking copper/zinc SOD (Sod1) are viable but have numerous pathologies and a reduced lifespan (see article on superoxide), while mice without the extracellular SOD have minimal defects (sensitive to hyperoxia). In plants, SOD isozymes are present in the cytosol and mitochondria, with an iron SOD found in chloroplasts that is absent from vertebrates and yeast.
Catalases are enzymes that catalyse the conversion of hydrogen peroxide to water and oxygen, using either an iron or manganese cofactor. This protein is localized to peroxisomes in most eukaryotic cells. Catalase is an unusual enzyme since, although hydrogen peroxide is its only substrate, it follows a ping-pong mechanism. Here, its cofactor is oxidised by one molecule of hydrogen peroxide and then regenerated by transferring the bound oxygen to a second molecule of substrate. Despite its apparent importance in hydrogen peroxide removal, humans with genetic deficiency of catalase — "acatalasemia" — or mice genetically engineered to lack catalase completely, experience few ill effects.
Peroxiredoxins are peroxidases that catalyze the reduction of hydrogen peroxide, organic hydroperoxides, as well as peroxynitrite. They are divided into three classes: typical 2-cysteine peroxiredoxins; atypical 2-cysteine peroxiredoxins; and 1-cysteine peroxiredoxins. These enzymes share the same basic catalytic mechanism, in which a redox-active cysteine (the peroxidatic cysteine) in the active site is oxidized to a sulfenic acid by the peroxide substrate. Over-oxidation of this cysteine residue in peroxiredoxins inactivates these enzymes, but this can be reversed by the action of sulfiredoxin. Peroxiredoxins seem to be important in antioxidant metabolism, as mice lacking peroxiredoxin 1 or 2 have shortened lifespans and develop hemolytic anaemia, while plants use peroxiredoxins to remove hydrogen peroxide generated in chloroplasts.
Thioredoxin and glutathione systems
The thioredoxin system contains the 12-kDa protein thioredoxin and its companion thioredoxin reductase. Proteins related to thioredoxin are present in all sequenced organisms. Plants, such as Arabidopsis thaliana, have a particularly great diversity of isoforms. The active site of thioredoxin consists of two neighboring cysteines, as part of a highly conserved CXXC motif, that can cycle between an active dithiol form (reduced) and an oxidized disulfide form. In its active state, thioredoxin acts as an efficient reducing agent, scavenging reactive oxygen species and maintaining other proteins in their reduced state. After being oxidized, the active thioredoxin is regenerated by the action of thioredoxin reductase, using NADPH as an electron donor.
The glutathione system includes glutathione, glutathione reductase, glutathione peroxidases, and glutathione S-transferases. This system is found in animals, plants and microorganisms. Glutathione peroxidase is an enzyme containing four selenium cofactors that catalyzes the breakdown of hydrogen peroxide and organic hydroperoxides. There are at least four different glutathione peroxidase isozymes in animals. Glutathione peroxidase 1 is the most abundant and is a very efficient scavenger of hydrogen peroxide, while glutathione peroxidase 4 is most active with lipid hydroperoxides. Surprisingly, glutathione peroxidase 1 is dispensable, as mice lacking this enzyme have normal lifespans, but they are hypersensitive to induced oxidative stress. In addition, the glutathione S-transferases show high activity with lipid peroxides. These enzymes are at particularly high levels in the liver and also serve in detoxification metabolism.
Health research
Relation to diet
The dietary antioxidant vitamins A, C, and E are essential and required in specific daily amounts to prevent diseases. Polyphenols, which have antioxidant properties in vitro due to their free hydroxyl groups, are extensively metabolized by catechol-O-methyltransferase, which methylates free hydroxyl groups and thereby prevents them from acting as antioxidants in vivo.
Interactions
Common pharmaceuticals (and supplements) with antioxidant properties may interfere with the efficacy of certain anticancer medications and radiation therapy. Pharmaceuticals and supplements that have antioxidant properties suppress the formation of free radicals by inhibiting oxidation processes, whereas radiation therapy induces oxidative stress that damages essential components of cancer cells, such as proteins, nucleic acids, and the lipids that comprise cell membranes. Antioxidants may therefore blunt the treatment's effect by scavenging the very reactive species it depends on.
Adverse effects
Relatively strong reducing acids can have antinutrient effects by binding to dietary minerals such as iron and zinc in the gastrointestinal tract and preventing them from being absorbed. Examples are oxalic acid, tannins and phytic acid, which are high in plant-based diets. Calcium and iron deficiencies are not uncommon in diets in developing countries where less meat is eaten and there is high consumption of phytic acid from beans and unleavened whole grain bread. However, germination, soaking, or microbial fermentation are all household strategies that reduce the phytate and polyphenol content of unrefined cereal. Increases in Fe, Zn and Ca absorption have been reported in adults fed dephytinized cereals compared with cereals containing their native phytate.
High doses of some antioxidants may have harmful long-term effects. The Beta-Carotene and Retinol Efficacy Trial (CARET) study of people at high risk of lung cancer found that smokers given supplements containing beta-carotene and vitamin A had increased rates of lung cancer. Subsequent studies confirmed these adverse effects. These harmful effects may also be seen in non-smokers, as one meta-analysis including data from approximately 230,000 patients showed that β-carotene, vitamin A or vitamin E supplementation is associated with increased mortality, but saw no significant effect from vitamin C. No health risk was seen when all the randomized controlled studies were examined together, but an increase in mortality was detected when only high-quality and low-bias trials were examined separately. As the majority of these low-bias trials dealt with either elderly people or people with disease, these results may not apply to the general population. This meta-analysis was later repeated and extended by the same authors, confirming the previous results. These two publications are consistent with some previous meta-analyses that also suggested that vitamin E supplementation increased mortality, and that antioxidant supplements increased the risk of colon cancer. Beta-carotene may also increase the risk of lung cancer. Overall, the large number of clinical trials carried out on antioxidant supplements suggest that either these products have no effect on health, or that they cause a small increase in mortality in elderly or vulnerable populations.
Exercise and muscle soreness
A 2017 review found that taking antioxidant dietary supplements before or after exercise is unlikely to produce a noticeable reduction in post-exercise muscle soreness.
Levels in food
Antioxidant vitamins are found in vegetables, fruits, eggs, legumes and nuts. Vitamins A, C, and E can be destroyed by long-term storage or prolonged cooking. The effects of cooking and food processing are complex, as these processes can also increase the bioavailability of antioxidants, such as some carotenoids in vegetables. Processed food contains fewer antioxidant vitamins than fresh and uncooked foods, as preparation exposes food to heat and oxygen.
Other antioxidants are not obtained from the diet, but instead are made in the body. For example, ubiquinol (coenzyme Q) is poorly absorbed from the gut and is made through the mevalonate pathway. Another example is glutathione, which is made from amino acids. As any glutathione in the gut is broken down to free cysteine, glycine and glutamic acid before being absorbed, even large oral intake has little effect on the concentration of glutathione in the body. Although large amounts of sulfur-containing amino acids such as acetylcysteine can increase glutathione, no evidence exists that eating high levels of these glutathione precursors is beneficial for healthy adults.
Measurement and invalidation of ORAC
Measurement of polyphenol and carotenoid content in food is not a straightforward process, as antioxidants collectively are a diverse group of compounds with different reactivities to various reactive oxygen species. In food science analyses in vitro, the oxygen radical absorbance capacity (ORAC) was once an industry standard for estimating antioxidant strength of whole foods, juices and food additives, mainly from the presence of polyphenols. Earlier measurements and ratings by the United States Department of Agriculture were withdrawn in 2012 as biologically irrelevant to human health, referring to an absence of physiological evidence for polyphenols having antioxidant properties in vivo. Consequently, as of 2010, the ORAC method, derived only from in vitro experiments, is no longer considered relevant to human diets or biology.
Alternative in vitro measurements of antioxidant content in foods – also based on the presence of polyphenols – include the Folin-Ciocalteu reagent, and the Trolox equivalent antioxidant capacity assay.
https://en.wikipedia.org/wiki/Brass
Brass is an alloy of copper (Cu) and zinc (Zn), in proportions which can be varied to achieve different colours and mechanical, electrical, acoustic, and chemical properties, but copper typically has the larger proportion. In use since prehistoric times, it is a substitutional alloy: atoms of the two constituents may replace each other within the same crystal structure.
Brass is similar to bronze, another copper alloy that uses tin instead of zinc. Both bronze and brass may include small proportions of a range of other elements including arsenic (As), lead (Pb), phosphorus (P), aluminium (Al), manganese (Mn), and silicon (Si). Historically, the distinction between the two alloys has been less consistent and clear, and increasingly museums use the more general term "copper alloy."
Brass has long been a popular material for its bright gold-like appearance and is still used for drawer pulls and doorknobs. It has also been widely used to make sculpture and utensils because of its low melting point, high workability (both with hand tools and with modern turning and milling machines), durability, and electrical and thermal conductivity. Brasses with higher copper content are softer and more golden in colour; conversely those with less copper and thus more zinc are harder and more silvery in colour.
Brass is still commonly used in applications where corrosion resistance and low friction are required, such as locks, hinges, gears, bearings, ammunition casings, zippers, plumbing, hose couplings, valves, and electrical plugs and sockets. It is used extensively for musical instruments such as horns and bells. The composition of brass, generally 66% copper and 34% zinc, makes it a favorable substitute for copper in costume jewelry and fashion jewelry, as it exhibits greater resistance to corrosion. Brass is not as hard as bronze, and so is not suitable for most weapons and tools. Nor is it suitable for marine uses, because the zinc reacts with minerals in salt water, leaving porous copper behind; marine brass, with added tin, avoids this, as does bronze.
Brass is often used in situations in which it is important that sparks not be struck, such as in fittings and tools used near flammable or explosive materials.
Properties
Brass is more malleable than bronze or zinc. The relatively low melting point of brass (about 900–940 °C, depending on composition) and its flow characteristics make it a relatively easy material to cast. By varying the proportions of copper and zinc, the properties of the brass can be changed, allowing hard and soft brasses. The density of brass is about 8.4–8.73 g/cm³, depending on composition.
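As a rough illustration of how composition shifts a bulk property, the sketch below estimates alloy density with a simple rule of mixtures over the pure-element densities. This linear model is an assumption for illustration only; real Cu-Zn alloys deviate slightly from it.

```python
# Hedged sketch: estimate brass density with a rule of mixtures over the
# pure-element densities. Real Cu-Zn alloys deviate slightly from this
# linear model, so treat the result as a rough estimate only.

CU_DENSITY = 8.96  # g/cm^3, pure copper
ZN_DENSITY = 7.14  # g/cm^3, pure zinc

def brass_density(zinc_mass_fraction: float) -> float:
    """Approximate density of a Cu-Zn alloy from its zinc mass fraction."""
    if not 0.0 <= zinc_mass_fraction <= 1.0:
        raise ValueError("mass fraction must be between 0 and 1")
    # Sum the mass-weighted specific volumes, then invert back to density.
    copper_fraction = 1.0 - zinc_mass_fraction
    specific_volume = (copper_fraction / CU_DENSITY
                       + zinc_mass_fraction / ZN_DENSITY)
    return 1.0 / specific_volume

# A common 66/34 cartridge-style composition:
print(round(brass_density(0.34), 2))  # prints 8.25
```

The estimate for 66% copper / 34% zinc lands inside the 8.4–8.73 g/cm³ range only approximately, which is the point: the linear model is a first-order guide, not a datasheet value.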
Today, almost 90% of all brass alloys are recycled. Because brass is not ferromagnetic, ferrous scrap can be separated from it by passing the scrap near a powerful magnet. Brass scrap is melted and recast into billets that are extruded into the desired form and size. The general softness of brass means that it can often be machined without the use of cutting fluid, though there are exceptions to this.
Aluminium makes brass stronger and more corrosion-resistant. Aluminium also causes a highly beneficial hard layer of aluminium oxide (Al2O3) to be formed on the surface that is thin, transparent, and self-healing. Tin has a similar effect and finds its use especially in seawater applications (naval brasses). Combinations of iron, aluminium, silicon, and manganese make brass wear- and tear-resistant. The addition of as little as 1% iron to a brass alloy will result in an alloy with a noticeable magnetic attraction.
Brass will corrode in the presence of moisture, chlorides, acetates, ammonia, and certain acids. This often happens when the copper reacts with sulfur to form a brown and eventually black surface layer of copper sulfide which, if regularly exposed to slightly acidic water such as urban rainwater, can then oxidize in air to form a patina of green-blue copper carbonate. Depending on how the patina layer was formed, it may protect the underlying brass from further damage.
Although copper and zinc have a large difference in electrical potential, the resulting brass alloy does not experience internalized galvanic corrosion because of the absence of a corrosive environment within the mixture. However, if brass is placed in contact with a more noble metal such as silver or gold in such an environment, the brass will corrode galvanically; conversely, if brass is in contact with a less-noble metal such as zinc or iron, the less noble metal will corrode and the brass will be protected.
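The nobility ordering described above can be sketched by comparing standard reduction potentials: in a corrosive, conductive environment, the less noble (more negative) metal of a couple becomes the anode and corrodes. The elemental values below are textbook standard reduction potentials; the single "brass" entry is an illustrative assumption, since an alloy exhibits a mixed potential rather than one tabulated number.

```python
# Hedged sketch: predict which of two coupled metals corrodes galvanically,
# using standard reduction potentials (V vs. SHE). The value for brass is
# an illustrative assumption -- an alloy shows a mixed potential, not a
# single tabulated number.

STANDARD_POTENTIAL = {
    "zinc":   -0.76,
    "iron":   -0.44,
    "brass":  +0.30,  # assumed, close to copper
    "copper": +0.34,
    "silver": +0.80,
    "gold":   +1.50,
}

def galvanic_anode(metal_a: str, metal_b: str) -> str:
    """Return the metal that corrodes (the less noble of the pair)."""
    return min(metal_a, metal_b, key=STANDARD_POTENTIAL.__getitem__)

print(galvanic_anode("brass", "silver"))  # prints brass
print(galvanic_anode("brass", "zinc"))    # prints zinc
```

This mirrors the text: coupled to silver or gold, brass is the anode and corrodes; coupled to zinc or iron, the baser metal corrodes sacrificially and protects the brass.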
Lead content
To enhance the machinability of brass, lead is often added in concentrations of about 2%. Since lead has a lower melting point than the other constituents of the brass, it tends to migrate towards the grain boundaries in the form of globules as it cools from casting. The pattern the globules form on the surface of the brass increases the available lead surface area which, in turn, affects the degree of leaching. In addition, cutting operations can smear the lead globules over the surface. These effects can lead to significant lead leaching from brasses of comparatively low lead content.
In October 1999, the California State Attorney General sued 13 key manufacturers and distributors over lead content. In laboratory tests, state researchers found the average brass key, new or old, exceeded the California Proposition 65 limits by an average factor of 19, assuming handling twice a day. In April 2001 manufacturers agreed to reduce lead content to 1.5%, or face a requirement to warn consumers about lead content. Keys plated with other metals are not affected by the settlement, and may continue to use brass alloys with a higher percentage of lead content.
Also in California, lead-free materials must be used for "each component that comes into contact with the wetted surface of pipes and pipe fittings, plumbing fittings and fixtures". On 1 January 2010, the maximum amount of lead in "lead-free brass" in California was reduced from 4% to 0.25% lead.
Corrosion-resistant brass for harsh environments
Dezincification-resistant (DZR or DR) brasses, sometimes referred to as CR (corrosion resistant) brasses, are used where there is a large corrosion risk and where normal brasses do not meet the requirements. Applications with high water temperatures, chlorides present or deviating water qualities (soft water) play a role. DZR-brass is used in water boiler systems. This brass alloy must be produced with great care, with special attention placed on a balanced composition and proper production temperatures and parameters to avoid long-term failures.
An example of DZR brass is the C352 brass, with about 30% zinc, 61–63% copper, 1.7–2.8% lead, and 0.02–0.15% arsenic. The lead and arsenic significantly suppress the zinc loss.
"Red brasses", a family of alloys with high copper proportion and generally less than 15% zinc, are more resistant to zinc loss. One of the metals called "red brass" is 85% copper, 5% tin, 5% lead, and 5% zinc. Copper alloy C23000, which is also known as "red brass", contains 84–86% copper, 0.05% each iron and lead, with the balance being zinc.
Another such material is gunmetal, from the family of red brasses. Gunmetal alloys contain roughly 88% copper, 8–10% tin, and 2–4% zinc. Lead can be added for ease of machining or for bearing alloys.
"Naval brass", for use in seawater, contains 40% zinc but also 1% tin. The tin addition suppresses zinc leaching.
NSF International requires brasses with more than 15% zinc, used in piping and plumbing fittings, to be dezincification-resistant.
Use in musical instruments
The high malleability and workability, relatively good resistance to corrosion, and traditionally attributed acoustic properties of brass, have made it the usual metal of choice for construction of musical instruments whose acoustic resonators consist of long, relatively narrow tubing, often folded or coiled for compactness; silver and its alloys, and even gold, have been used for the same reasons, but brass is the most economical choice. Collectively known as brass instruments, these include the trombone, tuba, trumpet, cornet, flugelhorn, baritone horn, euphonium, tenor horn, and French horn, and many other "horns", many in variously sized families, such as the saxhorns.
Other wind instruments may be constructed of brass or other metals, and indeed most modern student-model flutes and piccolos are made of some variety of brass, usually a cupronickel alloy similar to nickel silver (also known as German silver). Clarinets, especially low clarinets such as the contrabass and subcontrabass, are sometimes made of metal because of limited supplies of the dense, fine-grained tropical hardwoods traditionally preferred for smaller woodwinds. For the same reason, some low clarinets, bassoons and contrabassoons feature a hybrid construction, with long, straight sections of wood, and curved joints, neck, and/or bell of metal. The use of metal also avoids the risks of exposing wooden instruments to changes in temperature or humidity, which can cause sudden cracking. Even though the saxophones and sarrusophones are classified as woodwind instruments, they are normally made of brass for similar reasons, and because their wide, conical bores and thin-walled bodies are more easily and efficiently made by forming sheet metal than by machining wood.
The keywork of most modern woodwinds, including wooden-bodied instruments, is also usually made of an alloy such as nickel silver. Such alloys are stiffer and more durable than the brass used to construct the instrument bodies, but still workable with simple hand tools—a boon to quick repairs. The mouthpieces of both brass instruments and, less commonly, woodwind instruments are often made of brass among other metals as well.
Next to the brass instruments, the most notable use of brass in music is in various percussion instruments, most notably cymbals, gongs, and orchestral (tubular) bells (large "church" bells are normally made of bronze). Small handbells and "jingle bells" are also commonly made of brass.
The harmonica is a free reed aerophone, also often made from brass. In organ pipes of the reed family, brass strips (called tongues) are used as the reeds, which beat against the shallot (or beat "through" the shallot in the case of a "free" reed). Although not part of the brass section, snare drums are also sometimes made of brass. Some parts on electric guitars are also made from brass, especially inertia blocks on tremolo systems for its tonal properties, and for string nuts and saddles for both tonal properties and its low friction.
Germicidal and antimicrobial applications
The bactericidal properties of brass have been observed for centuries, particularly in marine environments where it prevents biofouling. Depending upon the type and concentration of pathogens and the medium they are in, brass kills these microorganisms within a few minutes to hours of contact.
A large number of independent studies confirm this antimicrobial effect, even against antibiotic-resistant bacteria such as MRSA and VRSA. The mechanisms of antimicrobial action by copper and its alloys, including brass, are a subject of intense and ongoing investigation.
Season cracking
Brass is susceptible to stress corrosion cracking, especially from ammonia or substances containing or releasing ammonia. The problem is sometimes known as season cracking after it was first discovered in brass cartridges used for rifle ammunition during the 1920s in the British Indian Army. The problem was caused by high residual stresses from cold forming of the cases during manufacture, together with chemical attack from traces of ammonia in the atmosphere. The cartridges were stored in stables and the ammonia concentration rose during the hot summer months, thus initiating brittle cracks. The problem was resolved by annealing the cases, and storing the cartridges elsewhere.
Types
Depending on zinc content, brass forms several phases: α, a face-centred cubic solid solution of zinc in copper; β, a body-centred cubic CuZn phase; and γ, the Cu5Zn8 intermetallic. Other phases are ε, a hexagonal intermetallic CuZn3, and η, a solid solution of copper in zinc.
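The phase sequence can be sketched as a lookup over zinc content. The band edges below are rounded, room-temperature approximations assumed for illustration; the real Cu-Zn phase diagram boundaries shift with temperature and heat treatment.

```python
# Hedged sketch: map zinc content (wt%) to an approximate room-temperature
# phase region of the Cu-Zn system. The band edges are rounded,
# illustrative assumptions -- real phase-diagram boundaries vary with
# temperature and treatment, and two-phase gaps separate most regions.

PHASE_BANDS = [
    (35.0, "alpha"),         # fcc solid solution of zinc in copper
    (45.0, "alpha + beta"),  # two-phase region
    (50.0, "beta"),          # bcc CuZn
    (70.0, "gamma"),         # Cu5Zn8 intermetallic region
    (88.0, "epsilon"),       # hexagonal CuZn3 region
    (100.0, "eta"),          # solid solution of copper in zinc
]

def brass_phase(zinc_wt_percent: float) -> str:
    """Return the approximate phase label for a given zinc weight percent."""
    if not 0.0 <= zinc_wt_percent <= 100.0:
        raise ValueError("zinc content must be 0-100 wt%")
    for upper_bound, label in PHASE_BANDS:
        if zinc_wt_percent <= upper_bound:
            return label
    return "eta"

print(brass_phase(30))  # prints alpha (typical cartridge-brass region)
print(brass_phase(47))  # prints beta
```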
History
Although forms of brass have been in use since prehistory, its true nature as a copper-zinc alloy was not understood until the post-medieval period because the zinc vapor which reacted with copper to make brass was not recognized as a metal. The King James Bible makes many references to "brass" to translate "nechosheth" (bronze or copper) from Hebrew to English. The earliest brasses may have been natural alloys made by smelting zinc-rich copper ores. By the Roman period brass was being deliberately produced from metallic copper and zinc minerals using the cementation process, the product of which was calamine brass, and variations on this method continued until the mid-19th century. It was eventually replaced by speltering, the direct alloying of copper and zinc metal which was introduced to Europe in the 16th century.
Brass has sometimes historically been referred to as "yellow copper".
Early copper-zinc alloys
In West Asia and the Eastern Mediterranean early copper-zinc alloys are now known in small numbers from a number of 3rd millennium BC sites in the Aegean, Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia and from 2nd millennium BC sites in western India, Uzbekistan, Iran, Syria, Iraq and Canaan. Isolated examples of copper-zinc alloys are known in China from the 1st century AD, long after bronze was widely used.
The compositions of these early "brass" objects are highly variable, and most have zinc contents of between 5% and 15% wt, which is lower than in brass produced by cementation. These may be "natural alloys" manufactured by smelting zinc-rich copper ores under reducing conditions. Many have similar tin contents to contemporary bronze artefacts and it is possible that some copper-zinc alloys were accidental and perhaps not even distinguished from copper. However, the large number of copper-zinc alloys now known suggests that at least some were deliberately manufactured, and many have zinc contents of more than 12% wt, which would have resulted in a distinctive golden colour.
By the 8th–7th century BC Assyrian cuneiform tablets mention the exploitation of the "copper of the mountains" and this may refer to "natural" brass. "Oreikhalkon" (mountain copper), the Ancient Greek translation of this term, was later adapted to the Latin aurichalcum meaning "golden copper" which became the standard term for brass. In the 4th century BC Plato knew orichalkos as rare and nearly as valuable as gold and Pliny describes how aurichalcum had come from Cypriot ore deposits which had been exhausted by the 1st century AD. X-ray fluorescence analysis of 39 orichalcum ingots recovered from a 2,600-year-old shipwreck off Sicily found them to be an alloy made with 75–80% copper, 15–20% zinc and small percentages of nickel, lead and iron.
Roman world
During the later part of the first millennium BC the use of brass spread across a wide geographical area from Britain and Spain in the west to Iran and India in the east. This seems to have been encouraged by exports and influence from the Middle East and eastern Mediterranean, where deliberate production of brass from metallic copper and zinc ores had been introduced. The 4th century BC writer Theopompus, quoted by Strabo, describes how heating earth from Andeira in Turkey produced "droplets of false silver", probably metallic zinc, which could be used to turn copper into oreichalkos. In the 1st century AD the Greek physician Dioscorides seems to have recognized a link between zinc minerals and brass, describing how cadmia (zinc oxide) was found on the walls of furnaces used to heat either zinc ore or copper, and explaining that it can then be used to make brass.
By the first century BC brass was available in sufficient supply to use as coinage in Phrygia and Bithynia, and after the Augustan currency reform of 23 BC it was also used to make Roman dupondii and sestertii. The uniform use of brass for coinage and military equipment across the Roman world may indicate a degree of state involvement in the industry, and brass even seems to have been deliberately boycotted by Jewish communities in Palestine because of its association with Roman authority.
Brass was produced by the cementation process, in which copper and zinc ore are heated together until zinc vapor is produced, which reacts with the copper. There is good archaeological evidence for this process, and crucibles used to produce brass by cementation have been found on Roman-period sites including Xanten and Nidda in Germany, Lyon in France, and at a number of sites in Britain. They vary in size from tiny acorn-sized crucibles to large amphora-like vessels, but all are lidded and have elevated levels of zinc on the interior. They show no signs of slag or metal prills, suggesting that zinc minerals were heated to produce zinc vapor which reacted with metallic copper in a solid-state reaction. The fabric of these crucibles is porous, probably designed to prevent a buildup of pressure, and many have small holes in the lids which may be designed to release pressure or to add additional zinc minerals near the end of the process. Dioscorides mentioned that zinc minerals were used for both the working and finishing of brass, perhaps suggesting secondary additions.
Brass made during the early Roman period seems to have varied between 20% and 28% wt zinc. The high content of zinc in coinage and brass objects declined after the first century AD and it has been suggested that this reflects zinc loss during recycling and thus an interruption in the production of new brass. However it is now thought this was probably a deliberate change in composition and overall the use of brass increases over this period making up around 40% of all copper alloys used in the Roman world by the 4th century AD.
Medieval period
Little is known about the production of brass during the centuries immediately after the collapse of the Roman Empire. Disruption in the trade of tin for bronze from Western Europe may have contributed to the increasing popularity of brass in the east, and by the 6th–7th centuries AD over 90% of copper-alloy artefacts from Egypt were made of brass. However, other alloys such as low-tin bronze were also used, varying with local cultural attitudes, the purpose of the metal and access to zinc, especially between the Islamic and Byzantine worlds. Conversely, the use of true brass seems to have declined in Western Europe during this period in favor of gunmetals and other mixed alloys, but by about 1000 brass artefacts appear in Scandinavian graves in Scotland, brass was being used in the manufacture of coins in Northumbria, and there is archaeological and historical evidence for the production of calamine brass in Germany and the Low Countries, areas rich in calamine ore.
These places would remain important centres of brass making throughout the Middle Ages period, especially Dinant. Brass objects are still collectively known as dinanderie in French. The baptismal font at St Bartholomew's Church, Liège in modern Belgium (before 1117) is an outstanding masterpiece of Romanesque brass casting, though also often described as bronze. The metal of the early 12th-century Gloucester Candlestick is unusual even by medieval standards in being a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver, ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture may suggest that the candlestick was made from a hoard of old coins, probably Late Roman. Latten is a term for medieval alloys of uncertain and often variable composition often covering decorative borders and similar objects cut from sheet metal, whether of brass or bronze. Especially in Tibetan art, analysis of some objects shows very different compositions from different ends of a large piece. Aquamaniles were typically made in brass in both the European and Islamic worlds.
The cementation process continued to be used but literary sources from both Europe and the Islamic world seem to describe variants of a higher temperature liquid process which took place in open-topped crucibles. Islamic cementation seems to have used zinc oxide known as tutiya or tutty rather than zinc ores for brass-making, resulting in a metal with lower iron impurities. A number of Islamic writers and the 13th century Italian Marco Polo describe how this was obtained by sublimation from zinc ores and condensed onto clay or iron bars, archaeological examples of which have been identified at Kush in Iran. It could then be used for brass making or medicinal purposes. In 10th century Yemen al-Hamdani described how spreading al-iglimiya, probably zinc oxide, onto the surface of molten copper produced tutiya vapor which then reacted with the metal. The 13th century Iranian writer al-Kashani describes a more complex process whereby tutiya was mixed with raisins and gently roasted before being added to the surface of the molten metal. A temporary lid was added at this point presumably to minimize the escape of zinc vapor.
In Europe a similar liquid process in open-topped crucibles took place which was probably less efficient than the Roman process and the use of the term tutty by Albertus Magnus in the 13th century suggests influence from Islamic technology. The 12th century German monk Theophilus described how preheated crucibles were one sixth filled with powdered calamine and charcoal then topped up with copper and charcoal before being melted, stirred then filled again. The final product was cast, then again melted with calamine. It has been suggested that this second melting may have taken place at a lower temperature to allow more zinc to be absorbed. Albertus Magnus noted that the "power" of both calamine and tutty could evaporate and described how the addition of powdered glass could create a film to bind it to the metal.
German brass-making crucibles known from Dortmund, dating to the 10th century AD, and from Soest and Schwerte in Westphalia, dating to around the 13th century, confirm Theophilus' account: they are open-topped, although ceramic discs from Soest may have served as loose lids to reduce zinc evaporation, and they have slag on the interior resulting from a liquid process.
Africa
Some of the most famous objects in African art are the lost wax castings of West Africa, mostly from what is now Nigeria, produced first by the Kingdom of Ife and then the Benin Empire. Though normally described as "bronzes", the Benin Bronzes, now mostly in the British Museum and other Western collections, and the large portrait heads such as the Bronze Head from Ife of "heavily leaded zinc-brass" and the Bronze Head of Queen Idia, both also British Museum, are better described as brass, though of variable compositions. Work in brass or bronze continued to be important in Benin art and other West African traditions such as Akan goldweights, where the metal was regarded as a more valuable material than in Europe.
Renaissance and post-medieval Europe
The Renaissance saw important changes to both the theory and practice of brassmaking in Europe. By the 15th century there is evidence for the renewed use of lidded cementation crucibles at Zwickau in Germany. These large crucibles were capable of producing c.20 kg of brass. There are traces of slag and pieces of metal on the interior. Their irregular composition suggests that this was a lower temperature, not entirely liquid, process. The crucible lids had small holes which were blocked with clay plugs near the end of the process presumably to maximize zinc absorption in the final stages. Triangular crucibles were then used to melt the brass for casting.
16th-century technical writers such as Biringuccio, Ercker and Agricola described a variety of cementation brass making techniques and came closer to understanding the true nature of the process noting that copper became heavier as it changed to brass and that it became more golden as additional calamine was added. Zinc metal was also becoming more commonplace. By 1513 metallic zinc ingots from India and China were arriving in London and pellets of zinc condensed in furnace flues at the Rammelsberg in Germany were exploited for cementation brass making from around 1550.
Eventually it was discovered that metallic zinc could be alloyed with copper to make brass, a process known as speltering, and by 1657 the German chemist Johann Glauber had recognized that calamine was "nothing else but unmeltable zinc" and that zinc was a "half ripe metal". However some earlier high zinc, low iron brasses such as the 1530 Wightman brass memorial plaque from England may have been made by alloying copper with zinc and include traces of cadmium similar to those found in some zinc ingots from China.
However, the cementation process was not abandoned, and as late as the early 19th century there are descriptions of solid-state cementation in a domed furnace at around 900–950 °C and lasting up to 10 hours. The European brass industry continued to flourish into the post-medieval period, buoyed by innovations such as the 16th-century introduction of water-powered hammers for the production of wares such as pots. By 1559 the German city of Aachen alone was capable of producing 300,000 cwt of brass per year. After several false starts during the 16th and 17th centuries the brass industry was also established in England, taking advantage of abundant supplies of cheap copper smelted in the new coal-fired reverberatory furnace. In 1723 Bristol brass maker Nehemiah Champion patented the use of granulated copper, produced by pouring molten metal into cold water. This increased the surface area of the copper, helping it react, and zinc contents of up to 33% wt were reported using this new technique.
In 1738 Nehemiah's son William Champion patented a technique for the first industrial-scale distillation of metallic zinc, known as distillation per descensum or "the English process". This local zinc was used in speltering and allowed greater control over the zinc content of brass and the production of high-zinc copper alloys, which would have been difficult or impossible to produce using cementation, for use in expensive objects such as scientific instruments, clocks, brass buttons and costume jewelry. However, Champion continued to use the cheaper calamine cementation method to produce lower-zinc brass, and the archaeological remains of beehive-shaped cementation furnaces have been identified at his works at Warmley. By the mid-to-late 18th century, developments in cheaper zinc distillation such as Jean-Jacques Dony's horizontal furnaces in Belgium, the reduction of tariffs on zinc, and demand for corrosion-resistant high-zinc alloys increased the popularity of speltering, and as a result cementation was largely abandoned by the mid-19th century.
See also
Brass bed
Brass rubbing
List of copper alloys
https://en.wikipedia.org/wiki/Byte
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol refer to an 8-bit byte as an octet. The bits in an octet are usually numbered from 0 to 7 or from 7 to 0, depending on the bit endianness; in the first convention, the first bit is number 0 and the eighth bit is number 7.
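As an illustrative sketch (not from the source; the function names are ours), the two numbering conventions amount to extracting the same physical bits by different indices:

```python
def bit_lsb0(octet: int, n: int) -> int:
    """Bit number n when bits are counted 0 to 7 starting
    at the least-significant bit."""
    return (octet >> n) & 1

def bit_msb0(octet: int, n: int) -> int:
    """Bit number n when bits are counted 7 to 0, i.e. number 0
    is the most-significant bit."""
    return (octet >> (7 - n)) & 1

# In 0b10000000 only the most-significant bit is set: it is
# bit 7 under LSB-first numbering, bit 0 under MSB-first numbering.
```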
The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables or slabs, before the term byte became common.
The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.
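The arithmetic behind these figures can be sketched briefly (variable names are illustrative, not from any standard):

```python
BITS_PER_BYTE = 8

# An n-bit group represents 2**n distinct values; for n = 8
# that is 256 values, the unsigned range 0 through 255.
distinct_values = 2 ** BITS_PER_BYTE      # 256
max_unsigned = distinct_values - 1        # 255

# 32- and 64-bit words are built from four and eight bytes.
bytes_per_32bit_word = 32 // BITS_PER_BYTE  # 4
bytes_per_64bit_word = 64 // BITS_PER_BYTE  # 8
```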
The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte".
Etymology and history
The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction.
It is a deliberate respelling of bite to avoid accidental mutation to bit.
Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31.
Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches.
The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different.
In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data.
In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized."
The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.
The term octet is used to unambiguously specify a size of eight bits. It is used extensively in protocol definitions.
Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.
Unit symbol
The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B.
In the International System of Quantities (ISQ), B is also the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.
The lowercase letter o is defined as the symbol for the octet in IEC 80000-13. It is commonly used in languages such as French and Romanian, and is combined with metric prefixes for multiples, for example ko and Mo.
Multiple-byte units
More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10, following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10³); other systems are based on powers of 2. Nomenclature for these systems has been a source of confusion. Systems based on powers of 10 use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes (kibi, mebi, gibi, ...) and their corresponding symbols (Ki, Mi, Gi, ...), or they might use the prefixes K, M, and G, creating ambiguity when the prefixes M or G are used.
While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte.
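This growing deviation can be checked directly from the definitions above; a minimal sketch:

```python
# Compare each power-of-10 prefix with its power-of-2 counterpart.
pairs = [("kilobyte", "kibibyte"), ("megabyte", "mebibyte"),
         ("gigabyte", "gibibyte"), ("terabyte", "tebibyte")]
for step, (decimal_name, binary_name) in enumerate(pairs, start=1):
    decimal, binary = 1000 ** step, 1024 ** step
    shortfall = 1 - decimal / binary
    print(f"1 {decimal_name} is {shortfall:.1%} smaller than 1 {binary_name}")
```

This prints about 2.3% for the kilobyte and about 9.1% for the terabyte, matching the figures in the text.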
Units based on powers of 10
Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes. The additional prefixes ronna- for 1000⁹ and quetta- for 1000¹⁰ were adopted by the International Bureau of Weights and Measures (BIPM) in 2022.
This definition is most commonly used for data-rate units in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drives, flash-based storage, and DVDs. Operating systems that use this definition include macOS, iOS, Ubuntu, and Debian. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance.
Units based on powers of 2
A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024⁹) and quebi- (Qi, 1024¹⁰), but have not yet been adopted by the IEC and ISO.
An alternative system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024² bytes and 1 gigabyte (GB) is equal to 1024³ bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. The customary convention is used by the Microsoft Windows operating system, for random-access memory capacity such as main memory and CPU cache size, and in marketing and billing by telecommunication companies such as Vodafone, AT&T, Orange and Telstra.
For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.6 Snow Leopard and iOS 10, after which they switched to units based on powers of 10.
Parochial units
Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word, half word, long word, quad word, slab, superword and syllable. There are also informal terms, e.g., half byte and nybble for 4 bits, octal K for .
History of the conflicting definitions
Contemporary computer memory has a binary architecture making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1,024 is approximately 1,000. This definition was popular in early decades of personal computing, with products like the Tandon 5-inch DD floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1,024-byte convention. It was not universal, however. The Shugart SA-400 5-inch floppy disk held 109,375 bytes unformatted, and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". Other disks were advertised using a mixture of the two definitions: notably, -inch HD disks advertised as "1.44 MB" in fact have a capacity of 1,440 KiB, the equivalent of 1.47 MB or 1.41 MiB.
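The mixed-convention arithmetic behind the "1.44 MB" figure can be verified directly (a sketch; the variable names are ours):

```python
capacity = 1440 * 1024            # 1,474,560 bytes on a "1.44 MB" HD floppy

decimal_mb = capacity / 1000**2   # 1.47456  -> "about 1.47 MB"
binary_mib = capacity / 1024**2   # 1.40625  -> "about 1.41 MiB"
mixed = capacity / (1000 * 1024)  # exactly 1.44: the advertised figure
```

The advertised value divides by 1000 × 1024, a hybrid of the decimal and binary conventions.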
In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary).
In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. Thus one kibibyte (1 KiB) is 1024¹ bytes = 1024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1,048,576 bytes, and so on.
In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (KKB).
Modern standard definitions
The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1,000 bytes.
Lawsuits over definition
Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰, i.e., 1,073,741,824). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'"
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled.
Practical examples
Common uses
Many programming languages define the data type byte.
The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte.
Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127.
.NET programming languages, such as C#, define byte as an unsigned type and sbyte as a signed data type, holding values from 0 to 255 and −128 to 127, respectively.
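The signed and unsigned ranges are two's-complement reinterpretations of the same eight bits, which can be sketched as follows (the helper names are hypothetical):

```python
def as_signed_byte(u: int) -> int:
    """Reinterpret an unsigned byte (0..255) as a signed byte (-128..127)."""
    if not 0 <= u <= 255:
        raise ValueError("not a byte value")
    return u - 256 if u >= 128 else u

def as_unsigned_byte(s: int) -> int:
    """Reinterpret a signed byte (-128..127) as an unsigned byte (0..255)."""
    if not -128 <= s <= 127:
        raise ValueError("out of signed-byte range")
    return s & 0xFF
```

For example, the bit pattern 0xFF reads as 255 when unsigned and as −1 when signed.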
In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. A transmission unit might additionally include start bits, stop bits, and parity bits, and thus its size may vary from seven to twelve bits to contain a single seven-bit ASCII code.
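The framing overhead can be sketched with illustrative parameters (the defaults below are one common choice, not a universal standard):

```python
def frame_size(data_bits: int = 7, start_bits: int = 1,
               parity_bits: int = 1, stop_bits: int = 1) -> int:
    """Bits on the wire for one transmission unit carrying one character."""
    return start_bits + data_bits + parity_bits + stop_bits

# A seven-bit ASCII code with one start, one parity and one stop bit
# occupies 10 bits; with eight data bits and two stop bits it grows to 12.
```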
See also
Data
Data hierarchy
Nibble
Octet (computing)
Primitive data type
Tryte
Word (computer architecture)
Further reading
Ashley Taylor. "Bits and Bytes." Stanford. https://web.stanford.edu/class/cs101/bits-bytes.html
https://en.wikipedia.org/wiki/Bead
A bead is a small, decorative object that is formed in a variety of shapes and sizes of a material such as stone, bone, shell, glass, plastic, wood, or pearl and with a small hole for threading or stringing. Beads range in size from under to over in diameter.
Beads represent some of the earliest forms of jewellery, with a pair of beads made from Nassarius sea snail shells dating to approximately 100,000 years ago thought to be the earliest known example. Beadwork is the art or craft of making things with beads. Beads can be woven together with specialized thread, strung onto thread or soft, flexible wire, or adhered to a surface (e.g. fabric, clay).
Types of beads
Beads can be divided into several types of overlapping categories based on different criteria such as the materials from which they are made, the process used in their manufacturing, the place or period of origin, the patterns on their surface, or their general shape. In some cases, such as millefiori and cloisonné beads, multiple categories may overlap in an interdependent fashion.
Components
Beads can be made of many different materials. The earliest beads were made of a variety of natural materials which, after they were gathered, could be readily drilled and shaped. As humans became capable of obtaining and working with more difficult materials, those materials were added to the range of available substances.
In modern manufacturing, the most common bead materials are wood, plastic, glass, metal, and stone.
Natural materials
Beads are still made from many naturally occurring materials, both organic (i.e., of animal- or plant-based origin) and inorganic (purely mineral origin). However, some of these materials now routinely undergo some extra processing beyond mere shaping and drilling such as color enhancement via dyes or irradiation.
The natural organics include bone, coral, horn, ivory, seeds (such as tagua nuts), animal shell, and wood. For most of human history pearls were the ultimate precious beads of natural origin because of their rarity; the modern pearl-culturing process has made them far more common. Amber and jet are also of natural organic origin although both are the result of partial fossilization.
The natural inorganics include various types of stones, ranging from gemstones to common minerals, and metals. Of the latter, only a few precious metals occur in pure forms, but other purified base metals may as well be placed in this category along with certain naturally occurring alloys such as electrum.
Synthetic materials
The oldest-surviving synthetic materials used for bead making have generally been ceramics: pottery and glass. Beads were also made from ancient alloys such as bronze and brass, but as those were more vulnerable to oxidation they have generally been less well-preserved at archaeological sites.
Many different subtypes of glass are now used for beadmaking, some of which have their own component-specific names. Lead crystal beads have a high percentage of lead oxide in the glass formula, increasing the refractive index. Most of the other named glass types have their formulations and patterns inseparable from the manufacturing process.
Small, colorful, fusible plastic beads can be placed on a solid plastic-backed peg array to form designs and then melted together with a clothes iron; alternatively, they can be strung into necklaces and bracelets or woven into keychains. Fusible beads come in many colors and degrees of transparency/opacity, including varieties that glow in the dark or have internal glitter; peg boards come in various shapes and several geometric patterns. Plastic toy beads, made by chopping plastic tubes into short pieces, were introduced in 1958 by Munkplast AB in Munka-Ljungby, Sweden. Known as Indian beads, they were originally sewn together to form ribbons. The pegboard for bead designs was invented in the early 1960s (patented 1962, patent granted 1967) by Gunnar Knutsson in Vällingby, Sweden, as a therapy for elderly homes; the pegboard later gained popularity as a toy for children. The bead designs were glued to cardboard or Masonite boards and used as trivets. Later, when the beads were made of polyethylene, it became possible to fuse them with a flat iron. Hama come in three sizes: mini (diameter ), midi () and maxi (). Perler beads come in two sizes called classic (5mm) and biggie (10mm). Pyssla beads (by IKEA) only come in one size (5mm).
Manufacturing
Modern mass-produced beads are generally shaped by carving or casting, depending on the material and desired effect. In some cases, more specialized metalworking or glassworking techniques may be employed, or a combination of multiple techniques and materials may be used such as in cloisonné.
Glassworking
Most glass beads are pressed glass, mass-produced by preparing a molten batch of glass of the desired color and pouring it into molds to form the desired shape. This is also true of most plastic beads.
A smaller and more expensive subset of glass and lead crystal beads are cut into precise faceted shapes on an individual basis. This was once done by hand but has largely been taken over by precision machinery.
"Fire-polished" faceted beads are a less expensive alternative to hand-cut faceted glass or crystal. They derive their name from the second half of a two-part process: first, the glass batch is poured into round bead molds, then they are faceted with a grinding wheel. The faceted beads are then poured onto a tray and briefly reheated just long enough to melt the surface, "polishing" out any minor surface irregularities from the grinding wheel.
Specialized glass techniques and types
There are several specialized glassworking techniques that create a distinctive appearance throughout the body of the resulting beads, which are then primarily referred to by the glass type.
If the glass batch is used to create a large massive block instead of pre-shaping it as it cools, the result may then be carved into smaller items in the same manner as stone. Conversely, glass artisans may make beads by lampworking the glass on an individual basis; once formed, the beads undergo little or no further shaping after the layers have been properly annealed.
Most of these glass subtypes are some form of fused glass, although goldstone is created by controlling the reductive atmosphere and cooling conditions of the glass batch rather than by fusing separate components together.
Dichroic glass beads incorporate a semitransparent microlayer of metal between two or more layers. Fibre optic glass beads have an eyecatching chatoyant effect across the grain.
There are also several ways to fuse many small glass canes together into a multicolored pattern, resulting in millefiori beads or chevron beads (sometimes called "trade beads"). "Furnace glass" beads encase a multicolored core in a transparent exterior layer which is then annealed in a furnace.
More economically, millefiori beads can also be made by limiting the patterning process to long, narrow canes or rods known as murrine. Thin cross-sections, or "decals", can then be cut from the murrine and fused into the surface of a plain glass bead.
Shapes
Beads can be made in variety of shapes, including the following, as well as tubular and oval-shaped beads.
Round
This is the most common shape of beads strung on wire to create necklaces and bracelets. Round beads lie well together and are pleasing to the eye. Round beads can be made of glass, stone, ceramic, metal, or wood.
Square or cubed
Square beads can be used to enhance a necklace design as spacers; however, a necklace can also be strung with just square beads. Necklaces with square beads are used as rosary or prayer necklaces, and wooden or shell versions are made for beachwear.
Hair pipe beads
Elk rib bones were the original material for the long, tubular hair pipe beads. Today these beads are commonly made of bison and water buffalo bones and are popular for breastplates and chokers among Plains Indians. Black variations of these beads are made from the animals' horns.
Seed beads
Seed beads are uniformly shaped, spheroidal or tube-shaped beads ranging in size from under a millimetre to several millimetres. "Seed bead" is a generic term for any small bead. Usually rounded in shape, seed beads are most commonly used for loom and off-loom bead weaving.
Place or period of origin
African trade beads or slave beads may be antique beads that were manufactured in Europe and used for trade during the colonial period, such as chevron beads; or they may have been made in West Africa by and for Africans, such as Mauritanian Kiffa beads, Ghanaian and Nigerian powder glass beads, or African-made brass beads. Archaeologists have documented that as recently as the late-nineteenth century beads manufactured in Europe continued to accompany exploration of Africa using Indigenous routes into the interior.
Austrian crystal is a generic term for cut lead-crystal beads, based on the location and prestige of the Swarovski firm.
Czech glass beads are made in the Czech Republic, in particular an area called Jablonec nad Nisou. Production of glass beads in the area dates back to the 14th century, though production was depressed under communist rule. Because of this long tradition, their workmanship and quality has an excellent reputation.
Islamic glass beads have been made in a wide geographical and historical range of Islamic cultures. Used and manufactured from medieval Spain and North Africa in the West and to China in the East, they can be identified by recognizable features, including styles and techniques.
Vintage beads, in the collectibles and antique market, refers to items that are at least 25 or more years old. Vintage beads are available in materials that include lucite, plastic, crystal, metal and glass.
Miscellaneous ethnic beads
Tibetan Dzi beads and Rudraksha beads are used to make Buddhist and Hindu rosaries (malas). Magatama are traditional Japanese beads, and cinnabar was often used for making beads in China. Wampum are cylindrical white or purple beads made from quahog or North Atlantic channeled whelk shells by northeastern Native American tribes, such as the Wampanoag and Shinnecock. Job's tears are seed beads popular among southeastern Native American tribes. Heishe are beads made of shells or stones by the Kewa Pueblo people of New Mexico.
Symbolic meaning of beads
In many parts of the world, beads are used for symbolic purposes, for example:
use for prayer or devotion - e.g. rosary beads for Roman Catholics and many other Christians, misbaha for Shia and many other Muslims, japamala/nenju for Hindus, Buddhists, Jains, some Sikhs, Confucians, Taoists/Daoists, Shintoists, etc.
use for anti-tension devices, e.g. Greek komboloi, or worry beads.
use as currency e.g. Aggrey beads from Ghana
use for gaming e.g. oware beads for mancala
History
Beads are thought to be one of the earliest forms of trade between members of the human race. It is believed that bead trading was one of the reasons why humans developed language. Beads are said to have been used and traded for most of human history. The oldest beads found to date were at Blombos Cave, about 72,000 years old, and at Ksar Akil in Lebanon, about 40,000 years old.
Surface patterns
After shaping, glass and crystal beads can have their surface appearance enhanced by etching a translucent frosted layer, applying an additional color layer, or both. Aurora Borealis, or AB, is a surface coating that diffuses light into a rainbow. Other surface coatings are vitrail, moonlight, dorado, satin, star shine, and heliotrope.
Faux beads are beads that are made to look like a more expensive original material, especially in the case of fake pearls and simulated rocks, minerals and gemstones. Precious metals and ivory are also imitated.
Tagua nuts from South America are used as an ivory substitute since the natural ivory trade has been restricted worldwide.
See also
Fly tying (spherical brass, tungsten, and glass beads are often used in fly tying)
Glass beadmaking
Jewelry design
Mardi Gras beads
Murano beads
Pearl
Ultraviolet-sensitive bead
References
Further reading
Beck, Horace (1928). "Classification and Nomenclature of Beads and Pendants." Archaeologia 77. (Reprinted by Shumway Publishers, York, PA, 1981.)
Dubin, Lois Sherr. North American Indian Jewelry and Adornment: From Prehistory to the Present. New York: Harry N. Abrams, 1999: 170–171.
Dubin, Lois Sherr. The History of Beads: From 100,000 B.C. to the Present, Revised and Expanded Edition. New York: Harry N. Abrams, 2009.
https://en.wikipedia.org/wiki/Brain
The brain (or encephalon) is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most specialized and energy-consuming organ in the body, responsible for complex sensory perception, motor control, endocrine regulation and the development of intelligence.
While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates.
In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via root-like protrusions called dendrites and long fiber-like extensions called axons, which are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well developed in humans.
Physiologically, brains exert centralized control over a body's other organs. They act on the rest of the body both by generating patterns of muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information integrating capabilities of a centralized brain.
The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions remains largely unsolved. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism from a digital computer, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways.
This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context. The most important that are covered in the human brain article are brain disease and the effects of brain damage.
Anatomy
The shape and size of the brain vary greatly between species, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates.
The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another.
Cellular structure
The brains of all species are composed primarily of two broad classes of cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain.
The property that makes neurons unique is their ability to send signals to specific target cells over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials.
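The magnification analogy above can be checked with rough arithmetic. The microscopic dimensions assumed here (a 20 μm cell body, a 1 μm axon about 2 cm long) are illustrative round numbers, not measurements of any particular pyramidal cell:

```python
# All lengths in meters; the micro-scale values are assumptions.
SOMA_DIAMETER = 20e-6    # cell body, ~20 micrometers
AXON_DIAMETER = 1e-6     # axon, ~1 micrometer thick
AXON_LENGTH = 0.02       # axon, ~2 centimeters long
HUMAN_HEIGHT = 1.7       # target size of the magnified cell body

scale = HUMAN_HEIGHT / SOMA_DIAMETER       # magnification factor, 85,000x
scaled_diameter = AXON_DIAMETER * scale    # ~0.085 m: "a few centimeters"
scaled_length = AXON_LENGTH * scale        # ~1,700 m: "more than a kilometer"
```

Under these assumptions the magnified axon is about 8.5 cm thick and 1.7 km long, consistent with the analogy in the text.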
Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell.
Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory.
Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons.) Myelin is white, making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies.
Evolution
Generic bilaterian nervous system
Except for a few primitive organisms such as sponges (which have no nervous system) and cnidarians (which have a diffuse nervous system consisting of a nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body plan (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared late in the Cryogenian period, 700–650 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, including vertebrates, it is the most complex organ in the body. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain".
There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms and tunicates. It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure.
Invertebrates
This category includes tardigrades, arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures.
Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. Arthropods have a central brain, the supraesophageal ganglion, with three divisions and large optical lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates.
There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work:
Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have been shown to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates revealed a set of analogous genes, which were found to play similar roles in the mouse biological clock—and therefore almost certainly in the human biological clock as well. Studies of Drosophila also show that most neuropil regions of the brain are continuously reorganized throughout life in response to specific living conditions.
The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model organism for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed each one under an electron microscope, then visually matched fibers from section to section to map out every neuron and synapse in the entire body. The result was the complete neuronal wiring diagram of C. elegans – its connectome. Nothing approaching this level of detail is available for any other organism, and the information gained has enabled a multitude of studies that would otherwise not have been possible.
The sea slug Aplysia californica was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments.
Vertebrates
The first vertebrates appeared over 500 million years ago (Mya), during the Cambrian period, and may have resembled the modern hagfish in form. Jawed fish appeared by 445 Mya, amphibians by 350 Mya, reptiles by 310 Mya and mammals by 200 Mya (approximately). Each species has an equally long evolutionary history, but the brains of modern hagfishes, lampreys, sharks, amphibians, reptiles, and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical components, but many are rudimentary in the hagfish, whereas in mammals the foremost part (the telencephalon) is greatly elaborated and expanded.
Brains are most commonly compared in terms of their size. The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators tend to have larger brains than their prey, relative to body size.
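The power law described above can be expressed in a short sketch. The coefficient below is an illustrative placeholder, not a fitted value from the literature:

```python
def predicted_brain_mass(body_mass_g, coefficient=0.06, exponent=0.75):
    """Predict mammalian brain mass (grams) from body mass (grams) using
    an allometric power law, brain = c * body**0.75. The coefficient 0.06
    is an assumed illustrative constant, not a fitted parameter."""
    return coefficient * body_mass_g ** exponent

# Doubling body mass multiplies the predicted brain mass by 2**0.75
# (about 1.68), not by 2 - the sense in which the relation is nonlinear.
ratio = predicted_brain_mass(20_000) / predicted_brain_mass(10_000)
```

A primate, whose brain is 5 to 10 times larger than such a formula predicts, would appear as a point well above the fitted curve.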
All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the three areas are roughly equal in size. In many classes of vertebrates, such as fish and amphibians, the three parts remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the midbrain becomes very small.
The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges that separate the skull from the brain. Blood vessels enter the central nervous system through holes in the meningeal layers. The cells in the blood vessel walls are joined tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases of the brain).
Neuroanatomists usually divide the vertebrate brain into six main regions: the telencephalon (cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons, and medulla oblongata. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar cortex, consist of layers that are folded or convoluted to fit within the available space. Other parts, such as the thalamus and hypothalamus, consist of clusters of many small nuclei. Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity.
Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species.
Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood:
The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and involuntary motor functions such as vomiting, heart rate and digestive processes.
The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control often-voluntary but simple acts such as sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture.
The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belie its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus is engaged in additional involuntary or partially voluntary acts such as sleep and wake cycles, eating and drinking, and the release of some hormones.
The thalamus is a collection of nuclei with diverse functions: some are involved in relaying information to and from the cerebral hemispheres, while others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors such as eating, drinking, defecation, and copulation.
The cerebellum modulates the outputs of other brain systems, whether motor-related or thought-related, to make them certain and precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in but learned by trial and error. The muscle coordination learned while riding a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum. The cerebellum accounts for about 10% of the brain's total volume, yet holds some 50% of all its neurons.
The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals, it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some primitive fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain.
The pallium is a layer of grey matter that lies on the surface of the forebrain and is the most complex and most recent evolutionary development of the brain as an organ. In reptiles and mammals, it is called the cerebral cortex. Multiple functions involve the pallium, including smell and spatial memory. In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of gray matter and the amount of information that can be stored and processed.
The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in complex events such as spatial memory and navigation in fishes, birds, reptiles, and mammals.
The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia.
The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in humans and other primates (whose senses are dominated by information acquired by sight rather than smell).
Mammals
The most obvious difference between the brains of mammals and other vertebrates is in terms of size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size.
Size, however, is not the only difference: there are also substantial differences in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates.
The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.
Primates
The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The encephalization quotient (EQ) is used to compare brain sizes across species. It takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower.
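The encephalization quotient can be sketched as a ratio against the allometric prediction. Both constants below are illustrative assumptions rather than the fitted values used in the literature:

```python
def encephalization_quotient(brain_mass_g, body_mass_g,
                             coefficient=0.06, exponent=0.75):
    """EQ = observed brain mass divided by the brain mass expected for a
    typical mammal of the same body mass under a power law. The
    coefficient is an assumed placeholder, not a published constant."""
    expected = coefficient * body_mass_g ** exponent
    return brain_mass_g / expected
```

An animal whose brain is exactly as large as the power law predicts has an EQ of 1; humans, with an EQ of 7 to 8, have brains several times larger than that baseline.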
Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain.
Development
The brain develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away.
For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the prosencephalon (forebrain), mesencephalon (midbrain), and rhombencephalon (hindbrain). At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions.
Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding.
The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form.
Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development.
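The activity-dependent refinement described above can be caricatured in a few lines. This is a deliberately simplified toy model – the names, constants, and update rule are assumptions for illustration, not the biological mechanism in detail:

```python
def prune_synapses(weights, pre_spikes, post_spikes,
                   weaken=0.2, strengthen=0.05, threshold=0.05):
    """Toy model of activity-dependent refinement: a synapse weakens when
    its presynaptic axon fires but the target cell does not, strengthens
    when the two fire together, and vanishes once its weight falls below
    a threshold. Keys of `weights` are (presynaptic, postsynaptic) pairs."""
    surviving = {}
    for (pre, post), w in weights.items():
        if pre_spikes[pre] and not post_spikes[post]:
            w -= weaken        # pre fired, target silent: weaken
        elif pre_spikes[pre] and post_spikes[post]:
            w += strengthen    # correlated firing: strengthen
        if w >= threshold:
            surviving[(pre, post)] = w  # below threshold: synapse removed
    return surviving
```

Repeated over many waves of correlated activity, a rule of this kind keeps connections whose activity matches their targets and eliminates the rest, tightening an initially coarse map.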
In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan.
There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing – the nature versus nurture debate. Although many details remain to be settled, neuroscience research has clearly shown that both factors are important. Genes determine the general form of the brain, and genes determine how the brain reacts to experience. Experience, however, is required to refine the matrix of synaptic connections, which in its developed form contains far more information than the genome does. In some respects, all that matters is the presence or absence of experience during critical periods of development. In other respects, the quantity and quality of experience are important; for example, there is substantial evidence that animals raised in enriched environments have thicker cerebral cortices, indicating a higher density of synaptic connections, than animals whose levels of stimulation are restricted.
Physiology
The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.
Neurotransmitters and receptors
Neurotransmitters are chemicals that are released at synapses when the local membrane is depolarised and Ca2+ enters the cell, typically when an action potential arrives at the synapse. Neurotransmitters attach themselves to receptor molecules on the membrane of the synapse's target cell (or cells), and thereby alter the electrical or chemical properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others.
The two neurotransmitters that are most widely found in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA.
There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of many antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain but are not as ubiquitously distributed as glutamate and GABA.
Electrical activity
As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity, called beta and gamma waves, when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology.
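The rhythms mentioned above are conventionally classified by frequency. The cutoffs below are common textbook values (the theta band is included for completeness); exact boundaries vary somewhat between sources:

```python
def eeg_band(freq_hz):
    """Classify an EEG rhythm by frequency into the conventional bands.
    Boundaries are common textbook values, used here as assumptions."""
    if freq_hz < 4:
        return "delta"   # large slow waves, prominent in sleep
    if freq_hz < 8:
        return "theta"
    if freq_hz < 13:
        return "alpha"   # awake but inattentive
    if freq_hz < 30:
        return "beta"    # active engagement
    return "gamma"       # active engagement, higher frequencies
```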
Metabolism
All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. The neurovascular unit regulates cerebral blood flow so that activated neurons can be supplied with energy. Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients.
Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods of PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids.
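A back-of-envelope check on those percentages, using assumed round figures (an adult basal metabolism of roughly 80 W):

```python
# Illustrative arithmetic only; the input figures are assumed round numbers.
basal_metabolic_rate_w = 80        # typical adult human basal metabolism, ~80 W
brain_fraction = 0.20              # lower end of the 20-25% range cited above
brain_power_w = basal_metabolic_rate_w * brain_fraction
# ~16 W -- the same order as the ~20 W figure often quoted for the human brain.

typical_vertebrate_fraction = 0.05  # midpoint of the 2-8% range for most vertebrates
ratio = brain_fraction / typical_vertebrate_fraction
# The human brain's share of basal metabolism is roughly 4x that of a typical vertebrate.
```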
Function
Information from the sense organs is collected in the brain. There it is used to determine what actions the organism is to take. The brain processes the raw data to extract information about the structure of the environment. Next it combines the processed information with information about the current needs of the animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns. These signal-processing tasks require intricate interplay between a variety of functional subsystems.
The function of the brain is to provide coherent control over the actions of an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other.
Perception
The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, the position of the body in space (proprioception), the chemical composition of the bloodstream, and more. In other animals additional senses are present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense mainly seen in aquatic animals.
Each sensory system begins with specialized receptor cells, such as photoreceptor cells in the retina of the eye, or vibration-sensitive hair cells in the cochlea of the ear. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract the relevant features, and integrated with signals coming from other sensory systems.
Motor control
Motor systems are areas of the brain that are involved in initiating body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control.
The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, supplementary motor area, basal ganglia, and cerebellum. In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system which controls the movement of the smooth muscle of the body.
Sleep
Many animals alternate between sleeping and waking in a daily cycle. Arousal and alertness are also modulated on a finer time scale by a network of brain areas. A key component of the sleep system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall with a period of about 24 hours (circadian rhythms): these activity fluctuations are driven by rhythmic changes in expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock.
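The entrainment described above, an intrinsic clock of roughly (but not exactly) 24 hours pulled into step by the light-dark cycle, can be sketched with a toy phase-oscillator model; the parameter values here are illustrative, not physiological:

```python
import numpy as np

# Toy sketch: an intrinsic ~24.5 h clock entrained to a 24 h light-dark
# cycle via a light-driven phase correction (standing in for the RHT input).
dt = 0.01                       # time step in hours
hours = np.arange(0, 500, dt)
omega = 2 * np.pi / 24.5        # intrinsic (free-running) angular frequency
omega_light = 2 * np.pi / 24.0  # angular frequency of the light-dark cycle
K = 0.05                        # coupling strength of light input (illustrative)

phase = 0.0
trace = np.empty(hours.size)
for i, t in enumerate(hours):
    light_phase = omega_light * t
    # Light advances or delays the clock toward the environmental phase.
    phase += dt * (omega + K * np.sin(light_phase - phase))
    trace[i] = light_phase - phase

# After a transient, the phase difference settles to a constant: the clock
# is entrained, running with a 24 h period despite its ~24.5 h intrinsic period.
```

Entrainment in this model only succeeds when the coupling K exceeds the frequency mismatch, mirroring the fact that circadian clocks can only be pulled so far from their intrinsic period.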
The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma.
Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern.
Homeostasis
For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as the pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.)
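The thermostat analogy can be made concrete with a minimal bang-bang controller; the numbers are arbitrary:

```python
def thermostat_step(temp, setpoint, heater_on, hysteresis=0.5):
    """One tick of a bang-bang negative-feedback controller (a thermostat).

    The "error signal" is the deviation from the set-point; the response
    (heater on/off) pushes the temperature back toward it.
    """
    if temp < setpoint - hysteresis:
        heater_on = True
    elif temp > setpoint + hysteresis:
        heater_on = False
    return heater_on

# Toy simulation: the room leaks heat toward a 15 degree environment,
# and the heater adds heat when switched on.
temp, setpoint, heater_on = 15.0, 20.0, False
history = []
for _ in range(500):
    heater_on = thermostat_step(temp, setpoint, heater_on)
    temp += 0.2 * heater_on - 0.02 * (temp - 15.0)  # heating minus leakage
    history.append(temp)
# temp settles into a narrow oscillation around the 20 degree set-point.
```

The small hysteresis band prevents the controller from switching on and off every tick, a design choice shared by household thermostats and, loosely, by biological set-point regulation.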
In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity.
Motivation
Individual animals need to express survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future.
Most organisms studied to date use a reward–punishment mechanism: for instance, worms and insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward-punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. The basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced.
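The reward-punishment mechanism described above resembles what machine learning calls reinforcement learning. A hypothetical sketch in which a dopamine-like prediction-error signal strengthens a rewarded action; the action names and payoffs are invented for illustration:

```python
import random

random.seed(1)
values = {"press_lever": 0.0, "ignore_lever": 0.0}  # learned action tendencies
alpha = 0.1  # learning rate

def choose(values, epsilon=0.1):
    """Mostly pick the highest-valued action; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

for _ in range(200):
    action = choose(values)
    reward = 1.0 if action == "press_lever" else -0.1  # environment's feedback
    # Prediction error: how much better or worse the outcome was than expected.
    # A positive error strengthens the action; a negative one suppresses it.
    values[action] += alpha * (reward - values[action])

# "press_lever" ends up with the higher value, so it is chosen more often --
# the behavior is "repeated later, whenever a similar situation arises".
```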
Learning and memory
Almost all animals are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Already in the late 19th century theorists like Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days. Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial role in the process.
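The idea that learning is expressed as changes in synaptic connections is often formalized as a Hebbian rule ("cells that fire together wire together"). A minimal sketch of such an update, not a model of Bliss and Lømo's actual potentiation protocol:

```python
import numpy as np

n = 5                          # a tiny network of 5 neurons
weights = np.zeros((n, n))     # synaptic strengths, all initially zero
eta = 0.1                      # learning rate

# Repeatedly present a pattern in which neurons 0, 1, and 2 fire together.
pattern = np.array([1, 1, 1, 0, 0], dtype=float)
for _ in range(20):
    # Hebb's rule: the change in w_ij is proportional to the product of
    # presynaptic and postsynaptic activity.
    weights += eta * np.outer(pattern, pattern)
np.fill_diagonal(weights, 0.0)  # no self-synapses

# Synapses among the co-active trio are strengthened ("potentiated");
# synapses involving the silent neurons are unchanged.
```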
Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways:
Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another.
Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories.
Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information.
Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia.
Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition. A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement.
Research
The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy.
The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior.
Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients with intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as fMRI are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive.
Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior.
Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists.
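A common example of the simplified "units" mentioned above is the leaky integrate-and-fire neuron, which keeps the membrane-potential dynamics but abstracts away the biophysics of the action potential. A minimal simulation with illustrative parameter values:

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
# integrates input, and emits a spike (then resets) on crossing threshold.
dt = 0.1         # time step in ms
tau = 10.0       # membrane time constant in ms
v_rest = -70.0   # resting potential in mV
v_thresh = -55.0 # spike threshold in mV
v_reset = -75.0  # post-spike reset potential in mV

v = v_rest
spikes = []      # spike times in ms
current = 20.0   # constant injected drive (in mV-equivalent, illustrative units)
for step in range(int(200 / dt)):  # simulate 200 ms
    v += dt / tau * (-(v - v_rest) + current)  # leaky integration of the input
    if v >= v_thresh:      # threshold crossing -> record an action potential
        spikes.append(step * dt)
        v = v_reset        # reset after the spike

# With this drive the unit fires regularly; the firing rate grows with `current`.
```

Biologically realistic simulations replace this caricature with Hodgkin–Huxley-style equations for individual ion channels, at much greater computational cost.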
Computational neurogenetic modeling is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes.
Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times.
History
The oldest brain to have been discovered was found in Armenia in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12 to 14-year-old girl. Although the brain was shriveled, it was well preserved due to the climate inside the cave.
Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. The unknown author of On the Sacred Disease, a medical treatise in the Hippocratic Corpus, came down unequivocally in favor of the brain, writing:
The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically.
The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani (1737–1798), who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity.
In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep:
The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism.
One of the most influential early contributions was a 1959 paper titled What the frog's eye tells the frog's brain: the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space.
Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time.
Furthermore, even single neurons appear to be complex and capable of performing computations. So, brain models that do not reflect this are too abstract to be representative of brain operation; models that do try to capture this are very computationally expensive and arguably intractable with present computational resources. However, the Human Brain Project is trying to build a realistic, detailed computational model of the entire human brain. The wisdom of this approach has been publicly contested, with high-profile scientists on both sides of the argument.
In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research.
In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties; and neuroimaging.
Society and culture
As food
Animal brains are used as food in numerous cuisines.
In rituals
Some archaeological evidence suggests that the mourning rituals of European Neanderthals also involved the consumption of the brain.
The Fore people of Papua New Guinea are known to eat human brains. In funerary rituals, those close to the dead would eat the brain of the deceased to create a sense of immortality. A prion disease called kuru has been traced to this.
See also
Brain–computer interface
Central nervous system disease
List of neuroscience databases
Neurological disorder
Optogenetics
Outline of neuroscience
Aging brain
References
External links
The Brain from Top to Bottom, at McGill University
"The Brain", BBC Radio 4 discussion with Vivian Nutton, Jonathan Sawday & Marina Wallace (In Our Time, May 8, 2008)
Our Quest to Understand the Brain – with Matthew Cobb Royal Institution lecture. Archived at Ghostarchive.
Animal anatomy
Human anatomy by organ
Organs (anatomy)
https://en.wikipedia.org/wiki/Boron
Boron is a chemical element with the symbol B and atomic number 5. In its crystalline form it is a brittle, dark, lustrous metalloid; in its amorphous form it is a brown powder. As the lightest element of the boron group it has three valence electrons for forming covalent bonds, resulting in many compounds such as boric acid, the mineral sodium borate, and the ultra-hard crystals of boron carbide and boron nitride.
Boron is synthesized entirely by cosmic ray spallation and supernovae and not by stellar nucleosynthesis, so it is a low-abundance element in the Solar System and in the Earth's crust. It constitutes about 0.001 percent by weight of Earth's crust. It is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest known deposits are in Turkey, the largest producer of boron minerals.
Elemental boron is a metalloid that is found in small amounts in meteoroids but chemically uncombined boron is not otherwise found naturally on Earth. Industrially, the very pure element is produced with difficulty because of contamination by carbon or other elements that resist removal. Several allotropes exist: amorphous boron is a brown powder; crystalline boron is silvery to black, extremely hard (about 9.5 on the Mohs scale), and a poor electrical conductor at room temperature. The primary use of the element itself is as boron filaments with applications similar to carbon fibers in some high-strength materials.
Boron is primarily used in chemical compounds. About half of all production consumed globally is an additive in fiberglass for insulation and structural materials. The next leading use is in polymers and ceramics in high-strength, lightweight structural and heat-resistant materials. Borosilicate glass is desired for its greater strength and thermal shock resistance than ordinary soda lime glass. As sodium perborate, it is used as a bleach. A small amount is used as a dopant in semiconductors, and reagent intermediates in the synthesis of organic fine chemicals. A few boron-containing organic pharmaceuticals are used or are in study. Natural boron is composed of two stable isotopes, one of which (boron-10) has a number of uses as a neutron-capturing agent.
The intersection of boron with biology is very small. Consensus on it as essential for mammalian life is lacking. Borates have low toxicity in mammals (similar to table salt) but are more toxic to arthropods and are occasionally used as insecticides. Boron-containing organic antibiotics are known. Although only traces are required, it is an essential plant nutrient.
History
The word boron was coined from borax, the mineral from which it was isolated, by analogy with carbon, which boron resembles chemically.
Borax in its mineral form (then known as tincal) first saw use as a glaze, beginning in China circa 300 AD. Some crude borax traveled westward, and was apparently mentioned by the alchemist Jabir ibn Hayyan around 700 AD. Marco Polo brought some glazes back to Italy in the 13th century. Georgius Agricola, in around 1600, reported the use of borax as a flux in metallurgy. In 1777, boric acid was recognized in the hot springs (soffioni) near Florence, Italy, at which point it became known as sal sedativum, with ostensible medical benefits. The mineral was named sassolite, after Sasso Pisano in Italy. Sasso was the main source of European borax from 1827 to 1872, when American sources replaced it. Boron compounds were relatively rarely used until the late 1800s when Francis Marion Smith's Pacific Coast Borax Company first popularized and produced them in volume at low cost.
Boron was not recognized as an element until it was isolated by Sir Humphry Davy and by Joseph Louis Gay-Lussac and Louis Jacques Thénard. In 1808 Davy observed that electric current sent through a solution of borates produced a brown precipitate on one of the electrodes. In his subsequent experiments, he used potassium to reduce boric acid instead of electrolysis. He produced enough boron to confirm a new element and named it boracium. Gay-Lussac and Thénard used iron to reduce boric acid at high temperatures. By oxidizing boron with air, they showed that boric acid is its oxidation product. Jöns Jacob Berzelius identified it as an element in 1824. Pure boron was arguably first produced by the American chemist Ezekiel Weintraub in 1909.
Preparation of elemental boron in the laboratory
The earliest routes to elemental boron involved the reduction of boric oxide with metals such as magnesium or aluminium. However, the product is almost always contaminated with borides of those metals. Pure boron can be prepared by reducing volatile boron halides with hydrogen at high temperatures. Ultrapure boron for use in the semiconductor industry is produced by the decomposition of diborane at high temperatures and then further purified by the zone melting or Czochralski processes.
The production of boron compounds does not involve the formation of elemental boron, but exploits the convenient availability of borates.
Characteristics
Allotropes
Boron is similar to carbon in its capability to form stable covalently bonded molecular networks. Even nominally disordered (amorphous) boron contains regular boron icosahedra which are bonded randomly to each other without long-range order. Crystalline boron is a very hard, black material with a melting point above 2000 °C. It forms four major allotropes: α-rhombohedral and β-rhombohedral (α-R and β-R), γ-orthorhombic (γ) and β-tetragonal (β-T). All four phases are stable at ambient conditions, with β-rhombohedral being the most common and stable. An α-tetragonal phase also exists (α-T), but is very difficult to produce without significant contamination. Most of the phases are based on B12 icosahedra, but the γ phase can be described as a rocksalt-type arrangement of the icosahedra and B2 atomic pairs. It can be produced by compressing other boron phases to 12–20 GPa and heating to 1500–1800 °C; it remains stable after the temperature and pressure are released. The β-T phase is produced at similar pressures but higher temperatures of 1800–2200 °C. The α-T and β-T phases might coexist at ambient conditions, with the β-T phase being the more stable. Compressing boron above 160 GPa produces a boron phase with an as yet unknown structure, which is a superconductor at temperatures below 6–12 K. Borospherene (fullerene-like B40 molecules) and borophene (a proposed graphene-like structure) were described in 2014.
Chemistry of the element
Elemental boron is rare and poorly studied because the pure material is extremely difficult to prepare. Most studies of "boron" involve samples that contain small amounts of carbon. The chemical behavior of boron resembles that of silicon more than that of aluminium. Crystalline boron is chemically inert and resistant to attack by boiling hydrofluoric or hydrochloric acid. When finely divided, it is attacked slowly by hot concentrated hydrogen peroxide, hot concentrated nitric acid, hot sulfuric acid, or a hot mixture of sulfuric and chromic acids.
The rate of oxidation of boron depends on the crystallinity, particle size, purity and temperature. Boron does not react with air at room temperature, but at higher temperatures it burns to form boron trioxide:
4 B + 3 O2 → 2 B2O3
Boron undergoes halogenation to give trihalides; for example,
2 B + 3 Br2 → 2 BBr3
In practice, the trichloride is usually made from the oxide.
Atomic structure
Boron is the lightest element having an electron in a p-orbital in its ground state. Unlike most other p-elements, it rarely obeys the octet rule and usually places only six electrons (in three molecular orbitals) onto its valence shell. Boron is the prototype for the boron group (the IUPAC group 13), although the other members of this group are metals and more typical p-elements (only aluminium to some extent shares boron's aversion to the octet rule).
Boron also has a much lower electronegativity than the subsequent period 2 elements. Whereas lithium salts of the latter are common (e.g. lithium fluoride, lithium hydroxide, lithium amide, and methyllithium), lithium boryllides are extraordinarily rare. Strong bases do not deprotonate a borohydride R2BH to the boryl anion R2B−; instead they form the octet-complete adduct R2HB–base.
Chemical compounds
In the most familiar compounds, boron has the formal oxidation state III. These include oxides, sulfides, nitrides, and halides.
The trihalides adopt a trigonal planar structure. These compounds are Lewis acids in that they readily form adducts with electron-pair donors, which are called Lewis bases. For example, fluoride (F−) and boron trifluoride (BF3) combine to give the tetrafluoroborate anion, BF4−. Boron trifluoride is used in the petrochemical industry as a catalyst. The halides react with water to form boric acid.
Boron is found in nature on Earth almost entirely as various oxides of B(III), often associated with other elements. More than one hundred borate minerals contain boron in oxidation state +3. These minerals resemble silicates in some respects, although boron is often found not only in tetrahedral coordination with oxygen, but also in a trigonal planar configuration. Unlike silicates, boron minerals never contain boron with a coordination number greater than four. A typical motif is exemplified by the tetraborate anion of the common mineral borax. The formal negative charge of the tetrahedral borate center is balanced by metal cations in the minerals, such as sodium (Na+) in borax. The tourmaline group of borate-silicates is also a very important boron-bearing mineral group, and a number of borosilicates are also known to exist naturally.
Boranes
Boranes are chemical compounds of boron and hydrogen, with the generic formula of BxHy. These compounds do not occur in nature. Many of the boranes readily oxidise on contact with air, some violently. The parent member BH3 is called borane, but it is known only in the gaseous state, and dimerises to form diborane, B2H6. The larger boranes all consist of boron clusters that are polyhedral, some of which exist as isomers. For example, isomers of B20H26 are based on the fusion of two 10-atom clusters.
The most important boranes are diborane B2H6 and two of its pyrolysis products, pentaborane B5H9 and decaborane B10H14. A large number of anionic boron hydrides are known, e.g. [B12H12]2−.
The formal oxidation number in boranes is positive, and is based on the assumption that hydrogen is counted as −1 as in active metal hydrides. The mean oxidation number for the borons is then simply the ratio of hydrogen to boron in the molecule. For example, in diborane B2H6, the boron oxidation state is +3, but in decaborane B10H14, it is 7/5 or +1.4. In these compounds the oxidation state of boron is often not a whole number.
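The arithmetic behind these mean oxidation numbers is simple enough to sketch in a few lines (the helper name below is ours, chosen for illustration):

```python
# Mean formal oxidation state of boron in a borane BxHy,
# counting each hydrogen as -1 (as in active metal hydrides):
#   x * ox(B) + y * (-1) = 0   =>   ox(B) = y / x
def mean_boron_oxidation_state(x: int, y: int) -> float:
    """x = number of boron atoms, y = number of hydrogen atoms."""
    return y / x

print(mean_boron_oxidation_state(2, 6))    # diborane B2H6: prints 3.0
print(mean_boron_oxidation_state(10, 14))  # decaborane B10H14: prints 1.4
```

As the decaborane case shows, the result is generally not a whole number.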
Boron nitrides
The boron nitrides are notable for the variety of structures that they adopt. They exhibit structures analogous to various allotropes of carbon, including graphite, diamond, and nanotubes. In the diamond-like structure, called cubic boron nitride (tradename Borazon), boron and nitrogen atoms occupy the positions of the carbon atoms in diamond, but one in every four B–N bonds can be viewed as a coordinate covalent bond, wherein two electrons are donated by the nitrogen atom, which acts as the Lewis base, to form a bond to the Lewis acidic boron(III) centre. Cubic boron nitride, among other applications, is used as an abrasive, as it has a hardness comparable with that of diamond (the two substances are able to scratch each other). In the BN compound analogue of graphite, hexagonal boron nitride (h-BN), the positively charged boron and negatively charged nitrogen atoms in each plane lie adjacent to the oppositely charged atoms in the next plane. Consequently, graphite and h-BN have very different properties, although both are lubricants, as their planes slip past each other easily. However, h-BN is a relatively poor electrical and thermal conductor in the planar directions.
Organoboron chemistry
A large number of organoboron compounds are known and many are useful in organic synthesis. Many are produced by hydroboration, which employs diborane (B2H6, a simple borane), or by carboboration. Organoboron(III) compounds are usually tetrahedral or trigonal planar, for example, tetraphenylborate, [B(C6H5)4]− vs. triphenylborane, B(C6H5)3. However, multiple boron atoms reacting with each other have a tendency to form novel dodecahedral (12-sided) and icosahedral (20-sided) structures composed completely of boron atoms, or with varying numbers of carbon heteroatoms.
Organoboron chemicals have been employed in uses ranging from boron carbide (see below), a complex, very hard ceramic composed of boron-carbon cluster anions and cations, to carboranes, carbon-boron cluster compounds that can be halogenated to form reactive structures including carborane acid, a superacid. As one example, carboranes form useful molecular moieties that add considerable amounts of boron to other biochemicals in order to synthesize boron-containing compounds for boron neutron capture therapy for cancer.
Compounds of B(I) and B(II)
As anticipated by its hydride clusters, boron forms a variety of stable compounds with formal oxidation state less than three. B2F4 and B4Cl4 are well characterized.
Binary metal-boron compounds, the metal borides, contain boron in negative oxidation states. Illustrative is magnesium diboride (MgB2). Each boron atom has a formal −1 charge and magnesium is assigned a formal charge of +2. In this material, the boron centers are trigonal planar with an extra double bond for each boron, forming sheets akin to the carbon in graphite. However, unlike hexagonal boron nitride, which lacks electrons in the plane of the covalent atoms, the delocalized electrons in magnesium diboride allow it to conduct electricity similar to isoelectronic graphite. In 2001, this material was found to be a high-temperature superconductor, and it remains under active development. A project at CERN to make MgB2 cables has resulted in superconducting test cables able to carry 20,000 amperes for extremely high current distribution applications, such as the contemplated high-luminosity version of the Large Hadron Collider.
Certain other metal borides find specialized applications as hard materials for cutting tools. Often the boron in borides has fractional oxidation states, such as −1/3 in calcium hexaboride (CaB6).
From the structural perspective, the most distinctive chemical compounds of boron are the hydrides. Included in this series are the cluster compounds dodecaborate ([B12H12]2−), decaborane (B10H14), and the carboranes such as C2B10H12. Characteristically such compounds contain boron with coordination numbers greater than four.
Isotopes
Boron has two naturally occurring and stable isotopes, 11B (80.1%) and 10B (19.9%). The mass difference results in a wide range of δ11B values, defined as the fractional difference between the 11B/10B ratio of a sample and that of a standard, traditionally expressed in parts per thousand; in natural waters δ11B ranges from −16 to +59. There are 13 known isotopes of boron; the shortest-lived is 7B, which decays through proton emission and alpha decay with a half-life of 3.5×10−22 s. Isotopic fractionation of boron is controlled by the exchange reactions of the boron species B(OH)3 and [B(OH)4]−. Boron isotopes are also fractionated during mineral crystallization, during H2O phase changes in hydrothermal systems, and during hydrothermal alteration of rock. The latter effect results in preferential removal of the [10B(OH)4]− ion onto clays, leaving solutions enriched in 11B(OH)3; this may be responsible for the large 11B enrichment in seawater relative to both oceanic crust and continental crust, and the difference may act as an isotopic signature.
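The δ11B notation follows the standard per-mil isotope-ratio convention (the choice of reference standard, commonly NIST SRM 951 boric acid, is our addition here):

```latex
\delta^{11}\mathrm{B} =
\left(
  \frac{\left(^{11}\mathrm{B}/^{10}\mathrm{B}\right)_{\mathrm{sample}}}
       {\left(^{11}\mathrm{B}/^{10}\mathrm{B}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```

A positive δ11B thus means the sample is enriched in the heavy isotope relative to the standard, as seawater is relative to crustal rocks.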
The exotic 17B exhibits a nuclear halo, i.e. its radius is appreciably larger than that predicted by the liquid drop model.
The 10B isotope is useful for capturing thermal neutrons (see neutron cross section). The nuclear industry enriches natural boron to nearly pure 10B. The less-valuable by-product, depleted boron, is nearly pure 11B.
Commercial isotope enrichment
Because of its high neutron cross-section, boron-10 is often used to control fission in nuclear reactors as a neutron-capturing substance. Several industrial-scale enrichment processes have been developed; however, only the fractionated vacuum distillation of the dimethyl ether adduct of boron trifluoride (DME-BF3) and column chromatography of borates are being used.
Enriched boron (boron-10)
Enriched boron (10B) is used both in radiation shielding and as the primary nuclide in neutron capture therapy of cancer. In the latter ("boron neutron capture therapy" or BNCT), a compound containing 10B is incorporated into a pharmaceutical which is selectively taken up by a malignant tumor and tissues near it. The patient is then treated with a beam of low-energy neutrons at a relatively low neutron radiation dose. The neutrons, however, trigger energetic and short-range secondary alpha particle and lithium-7 heavy ion radiation, products of the boron–neutron nuclear reaction, which additionally bombards the tumor, especially from inside the tumor cells.
In nuclear reactors, 10B is used for reactivity control and in emergency shutdown systems. It can serve either function in the form of borosilicate control rods or as boric acid. In pressurized water reactors, 10B boric acid is added to the reactor coolant when the plant is shut down for refueling. It is then slowly filtered out over many months as fissile material is used up and the fuel becomes less reactive.
In future crewed interplanetary spacecraft, 10B has a theoretical role as structural material (as boron fibers or BN nanotube material) which would also serve a special role in the radiation shield. One of the difficulties in dealing with cosmic rays, which are mostly high energy protons, is that some secondary radiation from interaction of cosmic rays and spacecraft materials is high energy spallation neutrons. Such neutrons can be moderated by materials high in light elements, such as polyethylene, but the moderated neutrons continue to be a radiation hazard unless actively absorbed in the shielding. Among light elements that absorb thermal neutrons, 6Li and 10B appear as potential spacecraft structural materials which serve both for mechanical reinforcement and radiation protection.
Depleted boron (boron-11)
Radiation-hardened semiconductors
Cosmic radiation will produce secondary neutrons if it hits spacecraft structures. Those neutrons will be captured in 10B, if it is present in the spacecraft's semiconductors, producing a gamma ray, an alpha particle, and a lithium ion. Those resultant decay products may then irradiate nearby semiconductor "chip" structures, causing data loss (bit flipping, or single event upset). In radiation-hardened semiconductor designs, one countermeasure is to use depleted boron, which is greatly enriched in 11B and contains almost no 10B. This is useful because 11B is largely immune to radiation damage. Depleted boron is a byproduct of the nuclear industry (see above).
Proton-boron fusion
11B is also a candidate as a fuel for aneutronic fusion. When struck by a proton with energy of about 500 keV, it produces three alpha particles and 8.7 MeV of energy. Most other fusion reactions involving hydrogen and helium produce penetrating neutron radiation, which weakens reactor structures and induces long-term radioactivity, thereby endangering operating personnel. The alpha particles from 11B fusion can be turned directly into electric power, and all radiation stops as soon as the reactor is turned off.
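The quoted 8.7 MeV can be checked from the mass defect of the reaction, using rounded standard atomic mass values (the specific figures below are supplied by us for illustration):

```python
# Q-value of p + 11B -> 3 alpha, computed from the mass defect.
# Atomic masses in unified atomic mass units (rounded standard values):
m_H1  = 1.007825   # 1H (proton plus its electron)
m_B11 = 11.009305  # 11B
m_He4 = 4.002602   # 4He
U_TO_MEV = 931.494 # energy equivalent of 1 u in MeV

# Mass lost in the reaction, converted to energy released:
q_value = (m_H1 + m_B11 - 3 * m_He4) * U_TO_MEV
print(round(q_value, 2))  # prints 8.69, i.e. the ~8.7 MeV quoted above
```

Because the electron counts on both sides of the reaction balance, atomic (rather than bare-nuclear) masses give the same Q-value.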
NMR spectroscopy
Both 10B and 11B possess nuclear spin. The nuclear spin of 10B is 3 and that of 11B is 3/2. These isotopes are, therefore, of use in nuclear magnetic resonance spectroscopy, and spectrometers specially adapted to detecting the boron-11 nucleus are available commercially. The 10B and 11B nuclei also cause splitting in the resonances of attached nuclei.
Occurrence
Boron is rare in the Universe and solar system due to trace formation in the Big Bang and in stars. It is formed in minor amounts in cosmic ray spallation nucleosynthesis and may be found uncombined in cosmic dust and meteoroid materials.
In the high-oxygen environment of Earth, boron is always found fully oxidized to borate. Boron does not appear on Earth in elemental form. Extremely small traces of elemental boron have been detected in lunar regolith.
Although boron is a relatively rare element in the Earth's crust, representing only 0.001% of the crust mass, it can be highly concentrated by the action of water, in which many borates are soluble.
It is found naturally combined in compounds such as borax and boric acid (sometimes found in volcanic spring waters). About a hundred borate minerals are known.
On 5 September 2017, scientists reported that the Curiosity rover detected boron, an essential ingredient for life on Earth, on the planet Mars. Such a finding, along with previous discoveries that water may have been present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars.
Production
Economically important sources of boron are the minerals colemanite, rasorite (kernite), ulexite and tincal. Together these constitute 90% of mined boron-containing ore. The largest global borax deposits known, many still untapped, are in Central and Western Turkey, including the provinces of Eskişehir, Kütahya and Balıkesir. Global proven boron mineral mining reserves exceed one billion metric tonnes, against a yearly production of about four million tonnes.
Turkey and the United States are the largest producers of boron products. Turkey produces about half of the global yearly demand, through Eti Mine Works, a Turkish state-owned mining and chemicals company focusing on boron products. It holds a government monopoly on the mining of borate minerals in Turkey, which possesses 72% of the world's known deposits. In 2012, it held a 47% share of production of global borate minerals, ahead of its main competitor, Rio Tinto Group.
Almost a quarter (23%) of global boron production comes from the single Rio Tinto Borax Mine (also known as the U.S. Borax Boron Mine) near Boron, California.
Market trend
The average cost of crystalline elemental boron is US$5/g. Elemental boron is chiefly used in making boron fibers, where it is deposited by chemical vapor deposition on a tungsten core (see below). Boron fibers are used in lightweight composite applications, such as high strength tapes. This use is a very small fraction of total boron use. Boron is introduced into semiconductors as boron compounds, by ion implantation.
Estimated global consumption of boron (almost entirely as boron compounds) was about 4 million tonnes of B2O3 in 2012. As compounds such as borax and kernite its cost was US$377/tonne in 2019. Boron mining and refining capacities are considered to be adequate to meet expected levels of growth through the next decade.
The form in which boron is consumed has changed in recent years. The use of ores like colemanite has declined following concerns over arsenic content. Consumers have moved toward the use of refined borates and boric acid that have a lower pollutant content.
Increasing demand for boric acid has led a number of producers to invest in additional capacity. Turkey's state-owned Eti Mine Works opened a new boric acid plant with the production capacity of 100,000 tonnes per year at Emet in 2003. Rio Tinto Group increased the capacity of its boron plant from 260,000 tonnes per year in 2003 to 310,000 tonnes per year by May 2005, with plans to grow this to 366,000 tonnes per year in 2006. Chinese boron producers have been unable to meet rapidly growing demand for high quality borates. This has led to imports of sodium tetraborate (borax) growing by a hundredfold between 2000 and 2005 and boric acid imports increasing by 28% per year over the same period.
The rise in global demand has been driven by high growth rates in glass fiber, fiberglass and borosilicate glassware production. A rapid increase in the manufacture of reinforcement-grade boron-containing fiberglass in Asia has offset the development of boron-free reinforcement-grade fiberglass in Europe and the US. The recent rises in energy prices may lead to greater use of insulation-grade fiberglass, with consequent growth in boron consumption. Roskill Consulting Group forecasts that world demand for boron will grow by 3.4% per year to reach 21 million tonnes by 2010. The highest growth in demand is expected to be in Asia, where demand could rise by an average 5.7% per year.
Applications
Nearly all boron ore extracted from the Earth is destined for refinement into boric acid and sodium tetraborate pentahydrate. In the United States, 70% of the boron is used for the production of glass and ceramics.
The major global industrial-scale use of boron compounds (about 46% of end-use) is in production of glass fiber for boron-containing insulating and structural fiberglasses, especially in Asia. Boron is added to the glass as borax pentahydrate or boron oxide, to influence the strength or fluxing qualities of the glass fibers. Another 10% of global boron production is for borosilicate glass as used in high strength glassware. About 15% of global boron is used in boron ceramics, including super-hard materials discussed below. Agriculture consumes 11% of global boron production, and bleaches and detergents about 6%.
Elemental boron fiber
Boron fibers (boron filaments) are high-strength, lightweight materials that are used chiefly for advanced aerospace structures as a component of composite materials, as well as limited production consumer and sporting goods such as golf clubs and fishing rods. The fibers can be produced by chemical vapor deposition of boron on a tungsten filament.
Boron fibers and sub-millimeter sized crystalline boron springs are produced by laser-assisted chemical vapor deposition. Translation of the focused laser beam allows production of even complex helical structures. Such structures show good mechanical properties (elastic modulus 450 GPa, fracture strain 3.7%, fracture stress 17 GPa) and can be applied as reinforcement of ceramics or in micromechanical systems.
Boronated fiberglass
Fiberglass is a fiber reinforced polymer made of plastic reinforced by glass fibers, commonly woven into a mat. The glass fibers used in the material are made of various types of glass depending upon the fiberglass use. These glasses all contain silica or silicate, with varying amounts of oxides of calcium, magnesium, and sometimes boron. The boron is present as borosilicate, borax, or boron oxide, and is added to increase the strength of the glass, or as a fluxing agent to decrease the melting temperature of silica, which is too high to be easily worked in its pure form to make glass fibers.
The most common highly boronated glass used in fiberglass is E-glass (named for "Electrical" use, but now the most common fiberglass for general use). E-glass is an alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other common high-boron glasses include C-glass, an alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation, and D-glass, a borosilicate glass named for its low dielectric constant.
Not all fiberglasses contain boron, but on a global scale, most of the fiberglass used does contain it. Because of the ubiquitous use of fiberglass in construction and insulation, boron-containing fiberglasses consume half the global production of boron, and are the single largest commercial boron market.
Borosilicate glass
Borosilicate glass, which is typically 12–15% B2O3, 80% SiO2, and 2% Al2O3, has a low coefficient of thermal expansion, giving it a good resistance to thermal shock. Schott AG's "Duran" and Owens-Corning's trademarked Pyrex are two major brand names for this glass, used both in laboratory glassware and in consumer cookware and bakeware, chiefly for this resistance.
Boron carbide ceramic
Several boron compounds are known for their extreme hardness and toughness.
Boron carbide is a ceramic material which is obtained by decomposing B2O3 with carbon in an electric furnace:
2 B2O3 + 7 C → B4C + 6 CO
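The carbothermal reduction above is balanced, which can be confirmed by counting atoms on each side (a trivial check; the helper is ours):

```python
# Atom balance for: 2 B2O3 + 7 C -> B4C + 6 CO
from collections import Counter

def atoms(composition, coeff):
    """Scale an element->count map by a stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in composition.items()})

lhs = atoms({'B': 2, 'O': 3}, 2) + atoms({'C': 1}, 7)
rhs = atoms({'B': 4, 'C': 1}, 1) + atoms({'C': 1, 'O': 1}, 6)
print(lhs == rhs)  # prints True: 4 B, 6 O, 7 C on each side
```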
Boron carbide's structure is only approximately B4C, and it shows a clear depletion of carbon from this suggested stoichiometric ratio. This is due to its very complex structure. The substance can be viewed as having the empirical formula B12C3 (i.e., with B12 icosahedra being a motif), but with less carbon, as the suggested C3 units are replaced with C-B-C chains, and some smaller (B6) octahedra are present as well (see the boron carbide article for structural analysis). The repeating polymer plus semi-crystalline structure of boron carbide gives it great structural strength per weight. It is used in tank armor, bulletproof vests, and numerous other structural applications.
Boron carbide's ability to absorb neutrons without forming long-lived radionuclides (especially when doped with extra boron-10) makes the material attractive as an absorbent for neutron radiation arising in nuclear power plants. Nuclear applications of boron carbide include shielding, control rods and shut-down pellets. Within control rods, boron carbide is often powdered, to increase its surface area.
High-hardness and abrasive compounds
Boron carbide and cubic boron nitride powders are widely used as abrasives. Boron nitride is a material isoelectronic with carbon. Similar to carbon, it has both hexagonal (soft graphite-like h-BN) and cubic (hard, diamond-like c-BN) forms. h-BN is used as a high-temperature component and lubricant. c-BN, also known under the commercial name borazon, is a superior abrasive. Its hardness is only slightly lower than that of diamond, but its chemical stability is superior. Heterodiamond (also called BCN) is another diamond-like boron compound.
Metallurgy
Boron is added to boron steels at the level of a few parts per million to increase hardenability. Higher percentages are added to steels used in the nuclear industry due to boron's neutron absorption ability.
Boron can also increase the surface hardness of steels and alloys through boriding. Additionally metal borides are used for coating tools through chemical vapor deposition or physical vapor deposition. Implantation of boron ions into metals and alloys, through ion implantation or ion beam deposition, results in a spectacular increase in surface resistance and microhardness. Laser alloying has also been successfully used for the same purpose. These borides are an alternative to diamond coated tools, and their (treated) surfaces have similar properties to those of the bulk boride.
For example, rhenium diboride can be produced at ambient pressures, but is rather expensive because of rhenium. The hardness of ReB2 exhibits considerable anisotropy because of its hexagonal layered structure. Its value is comparable to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride.
Similarly, AlMgB14 + TiB2 composites possess high hardness and wear resistance and are used in either bulk form or as coatings for components exposed to high temperatures and wear loads.
Detergent formulations and bleaching agents
Borax is used in various household laundry and cleaning products, including the "20 Mule Team Borax" laundry booster and "Boraxo" powdered hand soap. It is also present in some tooth bleaching formulas.
Sodium perborate serves as a source of active oxygen in many detergents, laundry detergents, cleaning products, and laundry bleaches. However, despite its name, "Borateem" laundry bleach no longer contains any boron compounds, using sodium percarbonate instead as a bleaching agent.
Insecticides
Boric acid is used as an insecticide, notably against ants, fleas, and cockroaches.
Semiconductors
Boron is a useful dopant for such semiconductors as silicon, germanium, and silicon carbide. Having one fewer valence electron than the host atom, it donates a hole resulting in p-type conductivity. Traditional method of introducing boron into semiconductors is via its atomic diffusion at high temperatures. This process uses either solid (B2O3), liquid (BBr3), or gaseous boron sources (B2H6 or BF3). However, after the 1970s, it was mostly replaced by ion implantation, which relies mostly on BF3 as a boron source. Boron trichloride gas is also an important chemical in semiconductor industry, however, not for doping but rather for plasma etching of metals and their oxides. Triethylborane is also injected into vapor deposition reactors as a boron source. Examples are the plasma deposition of boron-containing hard carbon films, silicon nitride–boron nitride films, and for doping of diamond film with boron.
Magnets
Boron is a component of neodymium magnets (Nd2Fe14B), which are among the strongest types of permanent magnet. These magnets are found in a variety of electromechanical and electronic devices, such as magnetic resonance imaging (MRI) medical imaging systems, and in compact and relatively small motors and actuators. As examples, computer HDDs (hard disk drives), CD (compact disc) and DVD (digital versatile disc) players rely on neodymium magnet motors to deliver intense rotary power in a remarkably compact package. In mobile phones, 'Neo' magnets provide the magnetic field which allows tiny speakers to deliver appreciable audio power.
Shielding and neutron absorber in nuclear reactors
Boron shielding is used as a control for nuclear reactors, taking advantage of its high cross-section for neutron capture.
In pressurized water reactors a variable concentration of boric acid in the cooling water is used as a neutron poison to compensate for the variable reactivity of the fuel. When new fuel rods are inserted the boric acid concentration is at its maximum, and it is reduced over the lifetime of the fuel.
Other nonmedical uses
Because of its distinctive green flame, amorphous boron is used in pyrotechnic flares.
In the 1950s, there were several studies of the use of boranes as energy-increasing "Zip fuel" additives for jet fuel.
Starch- and casein-based adhesives contain sodium tetraborate decahydrate (Na2B4O7·10 H2O).
Some anti-corrosion systems contain borax.
Sodium borates are used as a flux for soldering silver and gold and with ammonium chloride for welding ferrous metals. They are also fire retarding additives to plastics and rubber articles.
Boric acid (also known as orthoboric acid) H3BO3 is used in the production of textile fiberglass and flat panel displays and in many PVAc- and PVOH-based adhesives.
Triethylborane is a substance which ignites the JP-7 fuel of the Pratt & Whitney J58 turbojet/ramjet engines powering the Lockheed SR-71 Blackbird. It was also used to ignite the F-1 engines on the Saturn V rocket used by NASA's Apollo and Skylab programs from 1967 until 1973. Today SpaceX uses it to ignite the engines on its Falcon 9 rocket. Triethylborane is suitable for this because of its pyrophoric properties, especially the fact that it burns at a very high temperature. Triethylborane is also an industrial initiator in radical reactions, where it is effective even at low temperatures.
Borates are used as environmentally benign wood preservatives.
Pharmaceutical and biological applications
Boron plays a role in pharmaceutical and biological applications as it is found in various bacteria-produced antibiotics, such as boromycins, aplasmomycins, borophycins, and tartrolons. These antibiotics have shown inhibitory effects on the growth of certain bacteria, fungi, and protozoa. Boron is also being studied for its potential medicinal applications, including its incorporation into biologically active molecules for therapies like boron neutron capture therapy for brain tumors. Some boron-containing biomolecules may act as signaling molecules interacting with cell surfaces, suggesting a role in cellular communication.
Boric acid has antiseptic, antifungal, and antiviral properties and, for these reasons, is applied as a water clarifier in swimming pool water treatment. Mild solutions of boric acid have been used as eye antiseptics.
Bortezomib (marketed as Velcade and Cytomib) contains boron as an active element; it is the first of a new class of drugs called proteasome inhibitors, used for treating myeloma and one form of lymphoma (it is currently in experimental trials against other types of lymphoma). The boron atom in bortezomib binds the catalytic site of the 26S proteasome with high affinity and specificity.
A number of potential boronated pharmaceuticals using boron-10, have been prepared for use in boron neutron capture therapy (BNCT).
Some boron compounds show promise in treating arthritis, though none have as yet been generally approved for the purpose.
Tavaborole (marketed as Kerydin) is an aminoacyl tRNA synthetase inhibitor used to treat toenail fungus. It gained FDA approval in July 2014.
Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies or red blood cells, which allows for positron emission tomography (PET) imaging of cancer and hemorrhages, respectively. A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a non-immunogenic human protein, PSMA, together with a small molecule that is both positron-emitting (boron-bound 18F) and fluorescent, for dual-modality PET and fluorescent imaging of genome-modified cells, e.g. cancer, CRISPR/Cas9, or CAR T-cells, in an entire mouse. The dual-modality small molecule targeting PSMA was tested in humans; it located primary and metastatic prostate cancer, enabled fluorescence-guided removal of cancer, and detected single cancer cells in tissue margins.
In boron neutron capture therapy (BNCT) for malignant brain tumors, boron is being researched as a means of selectively targeting and destroying tumor cells. The goal is to deliver higher concentrations of the non-radioactive boron isotope 10B to the tumor cells than to the surrounding normal tissues. When these 10B-containing cells are irradiated with low-energy thermal neutrons, they undergo nuclear capture reactions, releasing high linear energy transfer (LET) particles such as α-particles and lithium-7 nuclei within a limited path length. These high-LET particles can destroy the adjacent tumor cells without causing significant harm to nearby normal cells. Boron acts as a selective agent due to its ability to absorb thermal neutrons and produce short-range physical effects primarily affecting the targeted tissue region. This binary approach allows for precise tumor cell killing while sparing healthy tissues. Effective delivery involves administering boron compounds or carriers capable of accumulating selectively in tumor cells compared to surrounding tissue. BSH and BPA have been used clinically, but research continues to identify more optimal carriers. Accelerator-based neutron sources have also been developed recently as an alternative to reactor-based sources, leading to improved efficiency and enhanced clinical outcomes in BNCT. By exploiting the properties of boron isotopes and targeted irradiation techniques, BNCT offers a potential approach to treating malignant brain tumors by selectively killing cancer cells while minimizing the damage caused by traditional radiation therapies.
BNCT has shown promising results in clinical trials for various other malignancies, including glioblastoma, head and neck cancer, cutaneous melanoma, hepatocellular carcinoma, lung cancer, and extramammary Paget's disease. The treatment involves a nuclear reaction between nonradioactive boron-10 isotope and low-energy thermal or high-energy epithermal neutrons to generate α particles and lithium nuclei that selectively destroy DNA in tumor cells. The primary challenge lies in developing efficient boron agents with higher content and specific targeting properties tailored for BNCT. Integration of tumor-targeting strategies with BNCT could potentially establish it as a practical personalized treatment option for different types of cancers. Ongoing research explores new boron compounds, optimization strategies, theranostic agents, and radiobiological advances to overcome limitations and cost-effectively improve patient outcomes.
Research areas
Magnesium diboride is an important superconducting material with a transition temperature of 39 K. MgB2 wires are produced with the powder-in-tube process and applied in superconducting magnets.
Amorphous boron is used as a melting point depressant in nickel-chromium braze alloys.
Hexagonal boron nitride forms atomically thin layers, which have been used to enhance the electron mobility in graphene devices. It also forms nanotubular structures (BNNTs), which have high strength, high chemical stability, and high thermal conductivity, among its list of desirable properties.
Boron has multiple applications in nuclear fusion research. It is commonly used for conditioning the walls in fusion reactors by depositing boron coatings on plasma-facing components and walls to reduce the release of hydrogen and impurities from the surfaces. It is also being used for the dissipation of energy in the fusion plasma boundary to suppress excessive energy bursts and heat fluxes to the walls.
Biological role
Boron is an essential plant nutrient, required primarily for maintaining the integrity of cell walls. However, high soil concentrations of greater than 1.0 ppm lead to marginal and tip necrosis in leaves as well as poor overall growth performance. Levels as low as 0.8 ppm produce these same symptoms in plants that are particularly sensitive to boron in the soil. Nearly all plants, even those somewhat tolerant of soil boron, will show at least some symptoms of boron toxicity when soil boron content is greater than 1.8 ppm. When this content exceeds 2.0 ppm, few plants will perform well and some may not survive.
It is thought that boron plays several essential roles in animals, including humans, but the exact physiological role is poorly understood. A small human trial published in 1987 reported on postmenopausal women first made boron deficient and then repleted with 3 mg/day. Boron supplementation markedly reduced urinary calcium excretion and elevated the serum concentrations of 17 beta-estradiol and testosterone.
Boron is not classified as an essential human nutrient because research has not established a clear biological function for it. Still, studies suggest that boron may exert beneficial effects on reproduction and development, calcium metabolism, bone formation, brain function, insulin and energy substrate metabolism, immunity, and steroid hormone (including estrogen) and vitamin D function, among other functions. The U.S. Food and Nutrition Board (FNB) found the existing data insufficient to derive a Recommended Dietary Allowance (RDA), Adequate Intake (AI), or Estimated Average Requirement (EAR) for boron. The U.S. Food and Drug Administration (FDA) has not established a Daily Value for boron for food and dietary supplement labeling purposes. While low boron status can be detrimental to health, probably increasing the risk of osteoporosis, poor immune function, and cognitive decline, high boron levels are associated with cell damage and toxicity. The exact mechanism by which boron exerts its physiological effects is not fully understood, but may involve interactions with adenosine monophosphate (AMP) and S-adenosyl methionine (SAM-e), two compounds involved in important cellular functions. Furthermore, boron appears to inhibit cyclic ADP-ribose, thereby affecting the release of calcium ions from the endoplasmic reticulum and various biological processes. Some studies suggest that boron may reduce levels of inflammatory biomarkers.
In humans, boron is usually consumed with food that contains boron, such as fruits, leafy vegetables, and nuts. Foods that are particularly rich in boron include avocados, dried fruits such as raisins, peanuts, pecans, prune juice, grape juice, wine and chocolate powder. According to 2-day food records from the respondents to the Third National Health and Nutrition Examination Survey (NHANES III), adult dietary intake was recorded at 0.9 to 1.4 mg/day.
In 2013, a hypothesis suggested that boron and molybdenum may have catalyzed the production of RNA on Mars, with life later transported to Earth by a meteorite around 3 billion years ago.
There exist several known boron-containing natural antibiotics. The first one found was boromycin, isolated from streptomyces in the 1960s. Others are tartrolons, a group of antibiotics discovered in the 1990s from culture broth of the myxobacterium Sorangium cellulosum.
Congenital endothelial dystrophy type 2, a rare form of corneal dystrophy, is linked to mutations in the SLC4A11 gene, which encodes a transporter reportedly regulating the intracellular concentration of boron.
Analytical quantification
For determination of boron content in food or materials, the colorimetric curcumin method is used. Boron is converted to boric acid or borates and on reaction with curcumin in acidic solution, a red colored boron-chelate complex, rosocyanine, is formed.
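Colorimetric determinations such as the curcumin method are generally quantified via the Beer–Lambert law, A = ε·l·c. As a rough sketch (the absorptivity and absorbance values below are assumed placeholders, not measured data for rosocyanine):

```python
# Beer-Lambert sketch for a colorimetric determination: concentration from
# measured absorbance, A = eps * l * c. The epsilon and absorbance values
# used below are illustrative assumptions, not data for the curcumin method.
def concentration_from_absorbance(absorbance: float,
                                  epsilon_l_per_mol_cm: float,
                                  path_cm: float = 1.0) -> float:
    """Concentration (mol/l) from absorbance via c = A / (eps * l)."""
    return absorbance / (epsilon_l_per_mol_cm * path_cm)

# e.g. A = 0.45 in a 1 cm cell with an assumed epsilon of 1.8e5 l/(mol*cm):
print(concentration_from_absorbance(0.45, 1.8e5))  # 2.5e-06 mol/l
```

In practice the absorptivity is obtained from a calibration curve of standards rather than taken from literature.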
Health issues and toxicity
Elemental boron, boron oxide, boric acid, borates, and many organoboron compounds are relatively nontoxic to humans and animals (with toxicity similar to that of table salt). The LD50 (dose at which there is 50% mortality) for animals is about 6 g per kg of body weight. Substances with an LD50 above 2 g/kg are considered nontoxic. An intake of 4 g/day of boric acid was reported without incident, but more than this is considered toxic over more than a few doses. Intakes of more than 0.5 g per day for 50 days cause minor digestive and other problems suggestive of toxicity. Dietary supplementation of boron may be helpful for bone growth, wound healing, and antioxidant activity, while an insufficient amount of boron in the diet may result in boron deficiency.
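The per-kilogram figures above scale linearly with body mass. A minimal sketch (the 70 kg body mass is an assumed example value):

```python
# Scaling the LD50 figure from the text (~6 g per kg body weight) to a total
# dose, and applying the text's 2 g/kg nontoxicity threshold. The body mass
# used in the example is an assumption for illustration.

def total_dose_g(ld50_g_per_kg: float, body_mass_kg: float) -> float:
    """Total dose (grams) corresponding to an LD50 given in g/kg."""
    return ld50_g_per_kg * body_mass_kg

def is_considered_nontoxic(ld50_g_per_kg: float,
                           threshold_g_per_kg: float = 2.0) -> bool:
    """Substances with LD50 above the threshold are considered nontoxic."""
    return ld50_g_per_kg > threshold_g_per_kg

print(total_dose_g(6.0, 70.0))      # 420.0 (grams, for a 70 kg animal)
print(is_considered_nontoxic(6.0))  # True
```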
Single medical doses of 20 g of boric acid for neutron capture therapy have been used without undue toxicity.
Boric acid is more toxic to insects than to mammals, and is routinely used as an insecticide.
The boranes (boron hydrogen compounds) and similar gaseous compounds are quite poisonous. Boron itself is not intrinsically poisonous; rather, the toxicity of these compounds depends on structure (for another example of this phenomenon, see phosphine). The boranes are also highly flammable and require special care when handling; some combinations of boranes and other compounds are highly explosive. Sodium borohydride presents a fire hazard owing to its reducing nature and the liberation of hydrogen on contact with acid. Boron halides are corrosive.
Boron is necessary for plant growth, but an excess of boron is toxic to plants, and occurs particularly in acidic soil. It presents as a yellowing from the tip inwards of the oldest leaves and black spots in barley leaves, but it can be confused with other stresses such as magnesium deficiency in other plants.
See also
Allotropes of boron
Boron deficiency
Boron oxide
Boron nitride
Boron neutron capture therapy
Boronic acid
Hydroboration-oxidation reaction
Suzuki coupling
References
External links
Boron at The Periodic Table of Videos (University of Nottingham)
J. B. Calvert: Boron, 2004, private website (archived version)
Chemical elements
Metalloids
Neutron poisons
Pyrotechnic fuels
Rocket fuels
Nuclear fusion fuels
Dietary minerals
Reducing agents
Articles containing video clips
Chemical elements with rhombohedral structure
https://en.wikipedia.org/wiki/Bromine
Bromine is a chemical element with the symbol Br and atomic number 35. It is a volatile red-brown liquid at room temperature that evaporates readily to form a similarly coloured vapour. Its properties are intermediate between those of chlorine and iodine. The element was isolated independently by two chemists, Carl Jacob Löwig (in 1825) and Antoine Jérôme Balard (in 1826); its name derives from the Ancient Greek bromos, meaning "stench", referring to its sharp and pungent smell.
Elemental bromine is very reactive and thus does not occur as a free element in nature. Instead, it can be isolated from colourless soluble crystalline mineral halide salts analogous to table salt, a property it shares with the other halogens. While it is rather rare in the Earth's crust, the high solubility of the bromide ion (Br−) has caused its accumulation in the oceans. Commercially the element is easily extracted from brine evaporation ponds, mostly in the United States and Israel. The mass of bromine in the oceans is about one three-hundredth that of chlorine.
At standard conditions for temperature and pressure it is a liquid; the only other element that is liquid under these conditions is mercury. At high temperatures, organobromine compounds readily dissociate to yield free bromine atoms, a process that stops free radical chemical chain reactions. This effect makes organobromine compounds useful as fire retardants, and more than half the bromine produced worldwide each year is put to this purpose. The same property causes ultraviolet sunlight to dissociate volatile organobromine compounds in the atmosphere to yield free bromine atoms, causing ozone depletion. As a result, many organobromine compounds—such as the pesticide methyl bromide—are no longer used. Bromine compounds are still used in well drilling fluids, in photographic film, and as an intermediate in the manufacture of organic chemicals.
Large amounts of bromide salts are toxic from the action of soluble bromide ions, causing bromism. However, bromine is beneficial for human eosinophils, and is an essential trace element for collagen development in all animals. Hundreds of known organobromine compounds are generated by terrestrial and marine plants and animals, and some serve important biological roles. As a pharmaceutical, the simple bromide ion (Br−) has inhibitory effects on the central nervous system, and bromide salts were once a major medical sedative, before replacement by shorter-acting drugs. They retain niche uses as antiepileptics.
History
Bromine was discovered independently by two chemists, Carl Jacob Löwig and Antoine Balard, in 1825 and 1826, respectively.
Löwig isolated bromine from a mineral water spring from his hometown Bad Kreuznach in 1825. Löwig used a solution of the mineral salt saturated with chlorine and extracted the bromine with diethyl ether. After evaporation of the ether, a brown liquid remained. With this liquid as a sample of his work he applied for a position in the laboratory of Leopold Gmelin in Heidelberg. The publication of the results was delayed and Balard published his results first.
Balard found bromine chemicals in the ash of seaweed from the salt marshes of Montpellier. The seaweed was used to produce iodine, but also contained bromine. Balard distilled the bromine from a solution of seaweed ash saturated with chlorine. The properties of the resulting substance were intermediate between those of chlorine and iodine; thus he tried to prove that the substance was iodine monochloride (ICl), but after failing to do so he was sure that he had found a new element and named it muride, derived from the Latin word ("brine").
After the French chemists Louis Nicolas Vauquelin, Louis Jacques Thénard, and Joseph-Louis Gay-Lussac approved the experiments of the young pharmacist Balard, the results were presented at a lecture of the Académie des Sciences and published in Annales de Chimie et Physique. In his publication, Balard stated that he changed the name from muride to brôme on the proposal of M. Anglada. The name brôme (bromine) derives from the Greek bromos ("stench"). Other sources claim that the French chemist and physicist Joseph-Louis Gay-Lussac suggested the name brôme for the characteristic smell of the vapors. Bromine was not produced in large quantities until 1858, when the discovery of salt deposits in Stassfurt enabled its production as a by-product of potash.
Apart from some minor medical applications, the first commercial use was the daguerreotype. In 1840, bromine was discovered to have some advantages over the previously used iodine vapor to create the light sensitive silver halide layer in daguerreotypy.
Potassium bromide and sodium bromide were used as anticonvulsants and sedatives in the late 19th and early 20th centuries, but were gradually superseded by chloral hydrate and then by the barbiturates. In the early years of the First World War, bromine compounds such as xylyl bromide were used as poison gas.
Properties
Bromine is the third halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to those of fluorine, chlorine, and iodine, and tend to be intermediate between those of the two neighbouring halogens, chlorine and iodine. Bromine has the electron configuration [Ar]3d10 4s2 4p5, with the seven electrons in the fourth and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between chlorine and iodine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than chlorine and more reactive than iodine. It is also a weaker oxidising agent than chlorine, but a stronger one than iodine. Conversely, the bromide ion is a weaker reducing agent than iodide, but a stronger one than chloride. These similarities led to chlorine, bromine, and iodine together being classified as one of the original triads of Johann Wolfgang Döbereiner, whose work foreshadowed the periodic law for chemical elements. It is intermediate in atomic radius between chlorine and iodine, and this leads to many of its atomic properties being similarly intermediate in value between chlorine and iodine, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. The volatility of bromine accentuates its very penetrating, choking, and unpleasant odour.
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of bromine are intermediate between those of chlorine and iodine. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of bromine are again intermediate between those of chlorine and iodine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: fluorine is a very pale yellow gas, chlorine is greenish-yellow, and bromine is a reddish-brown volatile liquid that melts at −7.2 °C and boils at 58.8 °C. (Iodine is a shiny black solid.) This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as bromine, results from the electron transition between the highest occupied antibonding π molecular orbital and the lowest vacant antibonding σ molecular orbital. The colour fades at low temperatures so that solid bromine at −195 °C is pale yellow.
Like solid chlorine and iodine, solid bromine crystallises in the orthorhombic crystal system, in a layered arrangement of Br2 molecules. The Br–Br distance is 227 pm (close to the gaseous Br–Br distance of 228 pm) and the Br···Br distance between molecules is 331 pm within a layer and 399 pm between layers (compare the van der Waals radius of bromine, 195 pm). This structure means that bromine is a very poor conductor of electricity, with a conductivity of around 5 × 10^−13 Ω^−1 cm^−1 just below the melting point, although this is higher than the essentially undetectable conductivity of chlorine.
At a pressure of 55 GPa (roughly 540,000 times atmospheric pressure) bromine undergoes an insulator-to-metal transition. At 75 GPa it changes to a face-centered orthorhombic structure. At 100 GPa it changes to a body centered orthorhombic monatomic form.
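The pressure comparison above is a straightforward unit conversion (1 atm = 101325 Pa exactly), which can be checked directly:

```python
# Sanity check of the comparison in the text: 55 GPa expressed in standard
# atmospheres. 1 atm is defined as exactly 101325 Pa.
ATM_PA = 101_325

def gpa_to_atm(p_gpa: float) -> float:
    """Convert a pressure in gigapascals to standard atmospheres."""
    return p_gpa * 1e9 / ATM_PA

print(round(gpa_to_atm(55)))  # ~5.4e5, i.e. "roughly 540,000 times atmospheric pressure"
```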
Isotopes
Bromine has two stable isotopes, 79Br and 81Br. These are its only two natural isotopes, with 79Br making up 51% of natural bromine and 81Br making up the remaining 49%. Both have nuclear spin 3/2− and thus may be used for nuclear magnetic resonance, although 81Br is more favourable. The nearly 1:1 distribution of the two isotopes in nature is helpful in the identification of bromine-containing compounds using mass spectrometry. Other bromine isotopes are all radioactive, with half-lives too short to occur in nature. Of these, the most important are 80Br (t1/2 = 17.7 min), 80mBr (t1/2 = 4.421 h), and 82Br (t1/2 = 35.28 h), which may be produced from the neutron activation of natural bromine. The most stable bromine radioisotope is 77Br (t1/2 = 57.04 h). The primary decay mode of isotopes lighter than 79Br is electron capture to isotopes of selenium; that of isotopes heavier than 81Br is beta decay to isotopes of krypton; and 80Br may decay by either mode, to stable 80Se or 80Kr. Bromine isotopes from 87Br and heavier undergo beta decay with neutron emission and are of practical importance because they are fission products; 87Br, with a half-life of 55 s, is notable as the longest-lived delayed neutron emitter.
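The half-lives quoted above plug directly into the exponential decay law, N(t)/N0 = (1/2)^(t / t½). A minimal sketch using the 35.28 h value from the text:

```python
# Exponential decay: fraction of a radioisotope remaining after time t,
# N(t)/N0 = (1/2)**(t / t_half). The 35.28 h half-life is the value the
# text quotes for the bromine radioisotope produced by neutron activation.

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the original nuclei remaining after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

print(fraction_remaining(35.28, 35.28))  # 0.5 after exactly one half-life
print(fraction_remaining(72.0, 35.28))   # ~0.24 after about three days
```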
Chemistry and compounds
Bromine is intermediate in reactivity between chlorine and iodine, and is one of the most reactive elements. Bond energies to bromine tend to be lower than those to chlorine but higher than those to iodine, and bromine is a weaker oxidising agent than chlorine but a stronger one than iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F2, +2.866 V; Cl2, +1.395 V; Br2, +1.087 V; I2, +0.615 V; At2, approximately +0.3 V). Bromination often leads to higher oxidation states than iodination but lower or equal oxidation states to chlorination. Bromine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Br bonds.
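These standard potentials translate into Gibbs free energies of reduction via ΔG° = −nFE°, which makes the trend in oxidising power quantitative. A small sketch using the values given above (n = 2 for X2 + 2e− → 2 X−; the Faraday constant is standard data, not from the text):

```python
# Gibbs free energy of the halogen reduction half-reactions from the
# standard potentials quoted in the text, via dG = -n * F * E0.
F = 96485  # Faraday constant, C/mol (standard value)

def gibbs_kj_per_mol(e_standard_v: float, n_electrons: int = 2) -> float:
    """Standard Gibbs energy (kJ/mol) for a reduction at potential E0."""
    return -n_electrons * F * e_standard_v / 1000

for halogen, e0 in [("Cl2", 1.395), ("Br2", 1.087), ("I2", 0.615)]:
    print(halogen, round(gibbs_kj_per_mol(e0), 1))
# Cl2 ~ -269.2, Br2 ~ -209.8, I2 ~ -118.7 kJ/mol: reduction of Br2 is less
# favourable than Cl2 but more favourable than I2, matching the text.
```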
Hydrogen bromide
The simplest compound of bromine is hydrogen bromide, HBr. It is mainly used in the production of inorganic bromides and alkyl bromides, and as a catalyst for many reactions in organic chemistry. Industrially, it is mainly produced by the reaction of hydrogen gas with bromine gas at 200–400 °C with a platinum catalyst. However, reduction of bromine with red phosphorus is a more practical way to produce hydrogen bromide in the laboratory:
2 P + 6 H2O + 3 Br2 → 6 HBr + 2 H3PO3
H3PO3 + H2O + Br2 → 2 HBr + H3PO4
At room temperature, hydrogen bromide is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative bromine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen bromide at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Aqueous hydrogen bromide is known as hydrobromic acid, which is a strong acid (pKa = −9) because the hydrogen bonds to bromine are too weak to inhibit dissociation. The HBr/H2O system also involves many hydrates HBr·nH2O for n = 1, 2, 3, 4, and 6, which are essentially salts of bromide anions and hydronium cations. Hydrobromic acid forms an azeotrope with boiling point 124.3 °C at 47.63 g HBr per 100 g solution; thus hydrobromic acid cannot be concentrated beyond this point by distillation.
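The azeotrope composition quoted above (47.63 g HBr per 100 g of solution) can be re-expressed as a molality. A minimal sketch (the molar mass of HBr, ~80.91 g/mol, is standard data rather than from the text):

```python
# Convert the azeotrope composition from the text (47.63 g HBr per 100 g of
# solution) into molality, mol HBr per kg of water.
M_HBR = 80.91  # g/mol, standard molar mass of HBr

def mass_fraction_to_molality(g_solute_per_100g_solution: float,
                              molar_mass: float) -> float:
    """Molality (mol/kg) from grams of solute per 100 g of solution."""
    moles = g_solute_per_100g_solution / molar_mass
    kg_solvent = (100.0 - g_solute_per_100g_solution) / 1000.0
    return moles / kg_solvent

print(round(mass_fraction_to_molality(47.63, M_HBR), 1))  # ~11.2 mol/kg
```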
Unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Br+ and HBr2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and bromine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides.
Other binary bromides
Nearly all elements in the periodic table form binary bromides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the very unstable XeBr2); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than bromine's (oxygen, nitrogen, fluorine, and chlorine), so that the resultant binary compounds are formally not bromides but rather oxides, nitrides, fluorides, or chlorides of bromine. (Nonetheless, nitrogen tribromide is named as a bromide as it is analogous to the other nitrogen trihalides.)
Bromination of metals with Br tends to yield lower oxidation states than chlorination with Cl when a variety of oxidation states is available. Bromides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrobromic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen bromide gas. These methods work best when the bromide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative bromination of the element with bromine or hydrogen bromide, high-temperature bromination of a metal oxide or other halide by bromine, a volatile metal bromide, carbon tetrabromide, or an organic bromide. For example, niobium(V) oxide reacts with carbon tetrabromide at 370 °C to form niobium(V) bromide. Another method is halogen exchange in the presence of excess "halogenating reagent", for example:
FeCl3 + BBr3 (excess) → FeBr3 + BCl3
When a lower bromide is wanted, either a higher halide may be reduced using hydrogen or a metal as a reducing agent, or thermal decomposition or disproportionation may be used, as follows:
3 WBr5 + Al → 3 WBr4 + AlBr3
EuBr3 + 1/2 H2 → EuBr2 + HBr
2 TaBr4 → TaBr3 + TaBr5
Most metal bromides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular bromides, as do metals in high oxidation states from +3 and above. Both ionic and covalent bromides are known for metals in oxidation state +3 (e.g. scandium bromide is mostly ionic, but aluminium bromide is not). Silver bromide is very insoluble in water and is thus often used as a qualitative test for bromine.
Bromine halides
The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and bromine is no exception. Bromine forms a monofluoride and monochloride, as well as a trifluoride and pentafluoride. Some cationic and anionic derivatives are also characterised, such as BrF2−, BrCl2−, BrF2+, BrF4+, and BrF6+. Apart from these, some pseudohalides are also known, such as cyanogen bromide (BrCN), bromine thiocyanate (BrSCN), and bromine azide (BrN3).
The pale-brown bromine monofluoride (BrF) is unstable at room temperature, disproportionating quickly and irreversibly into bromine, bromine trifluoride, and bromine pentafluoride. It thus cannot be obtained pure. It may be synthesised by the direct reaction of the elements, or by the comproportionation of bromine and bromine trifluoride at high temperatures. Bromine monochloride (BrCl), a red-brown gas, quite readily dissociates reversibly into bromine and chlorine at room temperature and thus also cannot be obtained pure, though it can be made by the reversible direct reaction of its elements in the gas phase or in carbon tetrachloride. Bromine monofluoride in ethanol readily leads to the monobromination of the aromatic compounds PhX (para-bromination occurs for X = Me, Bu, OMe, Br; meta-bromination occurs for the deactivating X = –CO2Et, –CHO, –NO2); this is due to heterolytic fission of the Br–F bond, leading to rapid electrophilic bromination by Br+.
At room temperature, bromine trifluoride (BrF3) is a straw-coloured liquid. It may be formed by directly fluorinating bromine at room temperature and is purified through distillation. It reacts violently with water and explodes on contact with flammable materials, but is a less powerful fluorinating reagent than chlorine trifluoride. It reacts vigorously with boron, carbon, silicon, arsenic, antimony, iodine, and sulfur to give fluorides, and will also convert most metals and many metal compounds to fluorides; as such, it is used to oxidise uranium to uranium hexafluoride in the nuclear power industry. Refractory oxides tend to be only partially fluorinated, but here the derivatives KBrF4 and BrF2SbF6 remain reactive. Bromine trifluoride is a useful nonaqueous ionising solvent, since it readily dissociates to form BrF2+ and BrF4− and thus conducts electricity.
Bromine pentafluoride (BrF) was first synthesised in 1930. It is produced on a large scale by direct reaction of bromine with excess fluorine at temperatures higher than 150 °C, and on a small scale by the fluorination of potassium bromide at 25 °C. It also reacts violently with water and is a very strong fluorinating agent, although chlorine trifluoride is still stronger.
Polybromine compounds
Although dibromine is a strong oxidising agent with a high first ionisation energy, very strong oxidisers such as peroxydisulfuryl fluoride (S2O6F2) can oxidise it to form the cherry-red Br2+ cation. A few other bromine cations are known, namely the brown Br3+ and dark brown Br5+. The tribromide anion, Br3−, has also been characterised; it is analogous to triiodide.
Bromine oxides and oxoacids
Bromine oxides are not as well-characterised as chlorine oxides or iodine oxides, as they are all fairly unstable: it was once thought that they could not exist at all. Dibromine monoxide is a dark-brown solid which, while reasonably stable at −60 °C, decomposes at its melting point of −17.5 °C; it is useful in bromination reactions and may be made from the low-temperature decomposition of bromine dioxide in a vacuum. It oxidises iodine to iodine pentoxide and benzene to 1,4-benzoquinone; in alkaline solutions, it gives the hypobromite anion.
So-called "bromine dioxide", a pale yellow crystalline solid, may be better formulated as bromine perbromate, BrOBrO3. It is thermally unstable above −40 °C, violently decomposing to its elements at 0 °C. Dibromine trioxide, syn-BrOBrO2, is also known; it is the anhydride of hypobromous acid and bromic acid. It is an orange crystalline solid which decomposes above −40 °C; if heated too rapidly, it explodes around 0 °C. A few other unstable radical oxides are also known, as are some poorly characterised oxides, such as dibromine pentoxide, tribromine octoxide, and bromine trioxide.
The four oxoacids, hypobromous acid (HOBr), bromous acid (HOBrO), bromic acid (HOBrO2), and perbromic acid (HOBrO3), are better studied due to their greater stability, though they are only so in aqueous solution. When bromine dissolves in aqueous solution, the following reactions occur:
Br2 + H2O ⇌ HOBr + H+ + Br−     K = 7.2 × 10^−9 mol^2 l^−2
Br2 + 2 OH− ⇌ OBr− + H2O + Br−   K = 2 × 10^8 mol^−1 l
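The first equilibrium shows why only a small fraction of dissolved bromine hydrolyses. A rough sketch: in otherwise pure water, x = [HOBr] = [H+] = [Br−], so K = x³/[Br2] and x = (K·[Br2])^(1/3). The initial Br2 concentration below is an assumed example value.

```python
# Rough equilibrium estimate for Br2 hydrolysis, Br2 + H2O <-> HOBr + H+ + Br-,
# with K ~ 7.2e-9 mol^2 l^-2 (the hydrolysis constant for bromine in water).
# Assumes x = [HOBr] = [H+] = [Br-] and neglects change in [Br2] (valid here
# because hydrolysis turns out to be slight).
K_HYDROLYSIS = 7.2e-9  # mol^2 l^-2

def hydrolysis_extent(br2_molar: float) -> float:
    """Approximate equilibrium [HOBr] (mol/l) from x**3 = K * [Br2]."""
    return (K_HYDROLYSIS * br2_molar) ** (1 / 3)

x = hydrolysis_extent(0.1)  # assumed 0.1 M dissolved bromine
print(f"{x:.2e}")           # ~9e-4 M: under 1% of the bromine hydrolyses
```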
Hypobromous acid is unstable to disproportionation. The hypobromite ions thus formed disproportionate readily to give bromide and bromate:
3 BrO− ⇌ 2 Br− + BrO3−     K = 10^15
Bromous acid and the bromites are very unstable, although the strontium and barium bromites are known. More important are the bromates, which are prepared on a small scale by oxidation of bromide by aqueous hypochlorite, and are strong oxidising agents. Unlike chlorates, which very slowly disproportionate to chloride and perchlorate, the bromate anion is stable to disproportionation in both acidic and alkaline aqueous solutions. Bromic acid is a strong acid. Bromides and bromates may comproportionate to bromine as follows:
BrO3− + 5 Br− + 6 H+ → 3 Br2 + 3 H2O
There were many failed attempts to obtain perbromates and perbromic acid, leading to some rationalisations as to why they should not exist, until 1968 when the perbromate anion was first synthesised from the radioactive beta decay of unstable 83SeO42−. Today, perbromates are produced by the oxidation of alkaline bromate solutions by fluorine gas. Excess bromate and fluoride are precipitated as silver bromate and calcium fluoride, and the perbromic acid solution may be purified. The perbromate ion is fairly inert at room temperature but is thermodynamically extremely oxidising, with extremely strong oxidising agents needed to produce it, such as fluorine or xenon difluoride. The Br–O bond in BrO4− is fairly weak, which corresponds to the general reluctance of the 4p elements arsenic, selenium, and bromine to attain their group oxidation state, as they come after the scandide contraction characterised by the poor shielding afforded by the radial-nodeless 3d orbitals.
Organobromine compounds
Like the other carbon–halogen bonds, the C–Br bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the bromide anion. Due to the difference of electronegativity between bromine (2.96) and carbon (2.55), the carbon atom in a C–Br bond is electron-deficient and thus electrophilic. The reactivity of organobromine compounds resembles but is intermediate between the reactivity of organochlorine and organoiodine compounds. For many applications, organobromides represent a compromise of reactivity and cost.
Organobromides are typically produced by additive or substitutive bromination of other organic precursors. Bromine itself can be used, but due to its toxicity and volatility, safer brominating reagents are normally used, such as N-bromosuccinimide. The principal reactions for organobromides include dehydrobromination, Grignard reactions, reductive coupling, and nucleophilic substitution.
Organobromides are the most common organohalides in nature, even though the concentration of bromide is only 0.3% of that for chloride in sea water, because of the easy oxidation of bromide to the equivalent of Br+, a potent electrophile. The enzyme bromoperoxidase catalyzes this reaction. The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons of bromomethane annually.
An old qualitative test for the presence of the alkene functional group is that alkenes turn brown aqueous bromine solutions colourless, forming a bromohydrin with some of the dibromoalkane also produced. The reaction passes through a short-lived strongly electrophilic bromonium intermediate. This is an example of a halogen addition reaction.
Occurrence and production
Bromine is significantly less abundant in the crust than fluorine or chlorine, comprising only 2.5 parts per million of the Earth's crustal rocks, and then only as bromide salts. It is the forty-sixth most abundant element in Earth's crust. It is significantly more abundant in the oceans, resulting from long-term leaching. There, it makes up 65 parts per million, corresponding to a ratio of about one bromine atom for every 660 chlorine atoms. Salt lakes and brine wells may have higher bromine concentrations: for example, the Dead Sea contains 0.4% bromide ions. It is from these sources that bromine extraction is mostly economically feasible.
The main sources of bromine production are Israel and Jordan. The element is liberated by halogen exchange, using chlorine gas to oxidise Br to Br. This is then removed with a blast of steam or air, and is then condensed and purified. Today, bromine is transported in large-capacity metal drums or lead-lined tanks that can hold hundreds of kilograms or even tonnes of bromine. The bromine industry is about one-hundredth the size of the chlorine industry. Laboratory production is unnecessary because bromine is commercially available and has a long shelf life.
Applications
A wide variety of organobromine compounds are used in industry. Some are prepared from bromine and others are prepared from hydrogen bromide, which is obtained by burning hydrogen in bromine.
Flame retardants
Brominated flame retardants represent a commodity of growing importance, and make up the largest commercial use of bromine. When the brominated material burns, the flame retardant produces hydrobromic acid which interferes in the radical chain reaction of the oxidation reaction of the fire. The mechanism is that the highly reactive hydrogen radicals, oxygen radicals, and hydroxy radicals react with hydrobromic acid to form less reactive bromine radicals (i.e., free bromine atoms). Bromine atoms may also react directly with other radicals to help terminate the free radical chain-reactions that characterise combustion.
To make brominated polymers and plastics, bromine-containing compounds can be incorporated into the polymer during polymerisation. One method is to include a relatively small amount of brominated monomer during the polymerisation process. For example, vinyl bromide can be used in the production of polyethylene, polyvinyl chloride, or polypropylene. Specific highly brominated molecules can also be added that participate in the polymerisation process. For example, tetrabromobisphenol A can be added to polyesters or epoxy resins, where it becomes part of the polymer. Epoxies used in printed circuit boards are normally made from such flame retardant resins, indicated by the FR in the abbreviation of the products (FR-4 and FR-2). In some cases, the bromine-containing compound may be added after polymerisation. For example, decabromodiphenyl ether can be added to the final polymers.
A number of gaseous or highly volatile brominated halomethane compounds are non-toxic and make superior fire suppressant agents by this same mechanism, and are particularly effective in enclosed spaces such as submarines, airplanes, and spacecraft. However, they are expensive and their production and use has been greatly curtailed due to their effect as ozone-depleting agents. They are no longer used in routine fire extinguishers, but retain niche uses in aerospace and military automatic fire suppression applications. They include bromochloromethane (Halon 1011, CHBrCl), bromochlorodifluoromethane (Halon 1211, CBrClF), and bromotrifluoromethane (Halon 1301, CBrF).
Other uses
Silver bromide is used, either alone or in combination with silver chloride and silver iodide, as the light sensitive constituent of photographic emulsions.
Ethylene bromide was an additive in gasolines containing lead anti-knock agents. It scavenges lead by forming volatile lead bromide, which is exhausted from the engine. This application accounted for 77% of bromine use in the US in 1966, but has declined since the 1970s due to environmental regulations (see below).
Brominated vegetable oil (BVO), a complex mixture of plant-derived triglycerides that have been reacted to contain atoms of the element bromine bonded to the molecules, is used primarily to help emulsify citrus-flavored soft drinks, preventing them from separating during distribution.
Poisonous bromomethane was widely used as a pesticide to fumigate soil and to fumigate housing by the tenting method. Ethylene bromide was similarly used. These volatile organobromine compounds are all now regulated as ozone-depleting agents. The Montreal Protocol on Substances that Deplete the Ozone Layer scheduled the phase-out of the ozone-depleting chemical by 2005, and organobromide pesticides are no longer used (in housing fumigation they have been replaced by compounds such as sulfuryl fluoride, which contains neither the chlorine nor the bromine organics that harm ozone). Before the Montreal Protocol, in 1991 for example, an estimated 35,000 tonnes of the chemical were used to control nematodes, fungi, weeds, and other soil-borne diseases.
In pharmacology, inorganic bromide compounds, especially potassium bromide, were frequently used as general sedatives in the 19th and early 20th century. Bromides in the form of simple salts are still used as anticonvulsants in both veterinary and human medicine, although the latter use varies from country to country. For example, the U.S. Food and Drug Administration (FDA) does not approve bromide for the treatment of any disease, and it was removed from over-the-counter sedative products like Bromo-Seltzer, in 1975. Commercially available organobromine pharmaceuticals include the vasodilator nicergoline, the sedative brotizolam, the anticancer agent pipobroman, and the antiseptic merbromin. Otherwise, organobromine compounds are rarely pharmaceutically useful, in contrast to the situation for organofluorine compounds. Several drugs are produced as the bromide (or equivalents, hydrobromide) salts, but in such cases bromide serves as an innocuous counterion of no biological significance.
Other uses of organobromine compounds include high-density drilling fluids, dyes (such as Tyrian purple and the indicator bromothymol blue), and pharmaceuticals. Bromine itself, as well as some of its compounds, is used in water treatment and is the precursor of a variety of inorganic compounds with an enormous number of applications (e.g. silver bromide for photography). Zinc–bromine batteries are hybrid flow batteries used for stationary electrical power backup and storage, from household to industrial scale.
Bromine is used in cooling towers (in place of chlorine) for controlling bacteria, algae, fungi, and zebra mussels.
Because it has antiseptic qualities similar to those of chlorine, bromine can be used in the same manner as chlorine as a disinfectant or antimicrobial in applications such as swimming pools. However, bromine is usually not used outdoors for these applications because it is more expensive than chlorine and lacks a stabilizer to protect it from the sun. For indoor pools it can be a good option, as it is effective across a wider pH range. It is also more stable in a heated pool or hot tub.
Biological role and toxicity
A 2014 study suggests that bromine (in the form of bromide ion) is a necessary cofactor in the biosynthesis of collagen IV, making the element essential to basement membrane architecture and tissue development in animals. Nevertheless, no clear deprivation symptoms or syndromes have been documented in mammals. In other biological functions, bromine may be non-essential but still beneficial when it takes the place of chlorine. For example, in the presence of hydrogen peroxide, HO, formed by the eosinophil, and either chloride or bromide ions, eosinophil peroxidase provides a potent mechanism by which eosinophils kill multicellular parasites (such as the nematode worms involved in filariasis) and some bacteria (such as tuberculosis bacteria). Eosinophil peroxidase is a haloperoxidase that preferentially uses bromide over chloride for this purpose, generating hypobromite (hypobromous acid), although the use of chloride is possible.
α-Haloesters are generally thought of as highly reactive and consequently toxic intermediates in organic synthesis. Nevertheless, mammals, including humans, cats, and rats, appear to biosynthesize traces of an α-bromoester, 2-octyl 4-bromo-3-oxobutanoate, which is found in their cerebrospinal fluid and appears to play a yet unclarified role in inducing REM sleep. Neutrophil myeloperoxidase can use HO and Br to brominate deoxycytidine, which could result in DNA mutations. Marine organisms are the main source of organobromine compounds, and it is in these organisms that bromine is more firmly shown to be essential. More than 1600 such organobromine compounds were identified by 1999. The most abundant is methyl bromide (CHBr), of which an estimated 56,000 tonnes is produced by marine algae each year. The essential oil of the Hawaiian alga Asparagopsis taxiformis consists of 80% bromoform. Most of such organobromine compounds in the sea are made by the action of a unique algal enzyme, vanadium bromoperoxidase.
The bromide anion is not very toxic: a normal daily intake is 2 to 8 milligrams. However, chronically high levels of bromide impair the membranes of neurons, progressively disrupting neuronal transmission and leading to the toxicity known as bromism. Bromide has an elimination half-life of 9 to 12 days, which can lead to excessive accumulation. Doses of 0.5 to 1 gram per day of bromide can lead to bromism. Historically, the therapeutic dose of bromide was about 3 to 5 grams, which explains why chronic toxicity (bromism) was once so common. While significant and sometimes serious disturbances of neurologic, psychiatric, dermatological, and gastrointestinal function occur, death from bromism is rare. Bromism is caused by a neurotoxic effect on the brain which results in somnolence, psychosis, seizures, and delirium.
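The connection between the 9–12 day elimination half-life and accumulation can be sketched with a simple first-order model (the model itself is an illustrative assumption, not from the source; 1 g/day is the upper dose quoted above):

```python
import math

# Sketch: steady-state bromide body burden under repeated daily dosing,
# assuming simple first-order elimination with the 9-12 day half-life.
def steady_state_burden(daily_dose_g, half_life_days):
    retained = math.exp(-math.log(2) / half_life_days)  # fraction kept per day
    # geometric series: dose * (1 + r + r^2 + ...) = dose / (1 - r)
    return daily_dose_g / (1 - retained)

for t_half in (9, 12):
    burden = steady_state_burden(1.0, t_half)
    print(f"t1/2 = {t_half} d: ~{burden:.0f} g accumulates at 1 g/day")
```

At 1 g/day the model predicts a steady-state body burden of roughly 13–18 g, consistent with the observation that gram-scale daily doses eventually produce bromism.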
Elemental bromine is toxic and causes chemical burns on human flesh. Inhaling bromine gas results in similar irritation of the respiratory tract, causing coughing, choking, shortness of breath, and death if inhaled in large enough amounts. Chronic exposure may lead to frequent bronchial infections and a general deterioration of health. As a strong oxidising agent, bromine is incompatible with most organic and inorganic compounds. Caution is required when transporting bromine; it is commonly carried in steel tanks lined with lead, supported by strong metal frames. The Occupational Safety and Health Administration (OSHA) of the United States has set a permissible exposure limit (PEL) for bromine at a time-weighted average (TWA) of 0.1 ppm. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 0.1 ppm and a short-term limit of 0.3 ppm. The exposure to bromine immediately dangerous to life and health (IDLH) is 3 ppm. Bromine is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
General and cited references
Chemical elements
Diatomic nonmetals
Gases with color
Halogens
Oxidizing agents
Reactive nonmetals
|
https://en.wikipedia.org/wiki/Barium
|
Barium is a chemical element with the symbol Ba and atomic number 56. It is the fifth element in group 2 and is a soft, silvery alkaline earth metal. Because of its high chemical reactivity, barium is never found in nature as a free element.
The most common minerals of barium are baryte (barium sulfate, BaSO4) and witherite (barium carbonate, BaCO3). The name barium originates from the alchemical derivative "baryta", from Greek (), meaning 'heavy'. Baric is the adjectival form of barium. Barium was identified as a new element in 1772, but not reduced to a metal until 1808 with the advent of electrolysis.
Barium has few industrial applications. Historically, it was used as a getter for vacuum tubes and in oxide form as the emissive coating on indirectly heated cathodes. It is a component of YBCO (high-temperature superconductors) and electroceramics, and is added to steel and cast iron to reduce the size of carbon grains within the microstructure. Barium compounds are added to fireworks to impart a green color. Barium sulfate is used as an insoluble additive to oil well drilling fluid. In a purer form it is used as X-ray radiocontrast agents for imaging the human gastrointestinal tract. Water-soluble barium compounds are poisonous and have been used as rodenticides.
Characteristics
Physical properties
Barium is a soft, silvery-white metal, with a slight golden shade when ultrapure. The silvery-white color of barium metal rapidly vanishes upon oxidation in air yielding a dark gray layer containing the oxide. Barium has a medium specific weight and high electrical conductivity. Because barium is difficult to purify, many of its properties have not been accurately determined.
At room temperature and pressure, barium metal adopts a body-centered cubic structure, with a barium–barium distance of 503 picometers, expanding with heating at a rate of approximately 1.8 × 10⁻⁵/°C. It is a very soft metal with a Mohs hardness of 1.25. Its melting temperature is intermediate between those of the lighter strontium and the heavier radium; however, its boiling point exceeds that of strontium. The density (3.62 g/cm3) is again intermediate between those of strontium (2.36 g/cm3) and radium (≈5 g/cm3).
Chemical reactivity
Barium is chemically similar to magnesium, calcium, and strontium, but even more reactive. It is usually found in the +2 oxidation state. The exceptions are a few rare and unstable molecular species characterised only in the gas phase, such as BaF, though in 2018 a barium(I) species was reported in a graphite intercalation compound. Reactions with chalcogens are highly exothermic (release energy); the reaction with oxygen or air occurs at room temperature. For this reason, metallic barium is often stored under oil or in an inert atmosphere. Reactions with other nonmetals, such as carbon, nitrogen, phosphorus, silicon, and hydrogen, are generally exothermic and proceed upon heating. Reactions with water and alcohols are very exothermic and release hydrogen gas:
Ba + 2 ROH → Ba(OR)2 + H2↑ (R is an alkyl group or a hydrogen atom)
Barium reacts with ammonia to form complexes such as Ba(NH3)6.
The metal is readily attacked by acids. Sulfuric acid is a notable exception because passivation stops the reaction by forming the insoluble barium sulfate on the surface. Barium combines with several other metals, including aluminium, zinc, lead, and tin, forming intermetallic phases and alloys.
Compounds
Barium salts are typically white when solid and colorless when dissolved. They are denser than the strontium or calcium analogs, except for the halides (see table; zinc is given for comparison).
Barium hydroxide ("baryta") was known to alchemists, who produced it by heating barium carbonate. Unlike calcium hydroxide, it absorbs very little CO2 in aqueous solutions and is therefore insensitive to atmospheric fluctuations. This property is used in calibrating pH equipment.
Volatile barium compounds burn with a green to pale green flame, which is an efficient test to detect a barium compound. The color results from spectral lines at 455.4, 493.4, 553.6, and 611.1 nm.
Organobarium compounds are a growing field of knowledge: recently discovered are dialkylbariums and alkylhalobariums.
Isotopes
Barium found in the Earth's crust is a mixture of seven primordial nuclides: barium-130, 132, and 134 through 138. Barium-130 undergoes very slow radioactive decay to xenon-130 by double beta plus decay, with a half-life of (0.5–2.7) × 10²¹ years (about 10¹¹ times the age of the universe). Its abundance is ≈0.1% that of natural barium. Theoretically, barium-132 can similarly undergo double beta decay to xenon-132; this decay has not been detected. The radioactivity of these isotopes is so weak that they pose no danger to life.
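The quoted comparison with the age of the universe is simple arithmetic to verify (a 13.8-billion-year age of the universe is assumed here):

```python
# Sketch: half-life of the 130Ba double beta decay vs. the age of the universe.
age_universe_yr = 13.8e9          # assumed value, years
for t_half in (0.5e21, 2.7e21):   # quoted half-life range, years
    ratio = t_half / age_universe_yr
    print(f"{t_half:.1e} yr is about {ratio:.0e} times the age of the universe")
```

The ratio spans roughly 4 × 10¹⁰ to 2 × 10¹¹, in line with the "about 10¹¹ times" figure.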
Of the stable isotopes, barium-138 composes 71.7% of all barium; other isotopes have decreasing abundance with decreasing mass number.
In total, barium has 40 known isotopes, ranging in mass between 114 and 153. The most stable artificial radioisotope is barium-133 with a half-life of approximately 10.51 years. Five other isotopes have half-lives longer than a day. Barium also has 10 meta states, of which barium-133m1 is the most stable with a half-life of about 39 hours.
History
Alchemists in the early Middle Ages knew about some barium minerals. Smooth pebble-like stones of mineral baryte were found in volcanic rock near Bologna, Italy, and so were called "Bologna stones". Alchemists were attracted to them because after exposure to light they would glow for years. The phosphorescent properties of baryte heated with organics were described by V. Casciorolus in 1602.
Carl Scheele determined that baryte contained a new element in 1772, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Oxidized barium was at first called "barote" by Guyton de Morveau, a name that was changed by Antoine Lavoisier to baryte (in French) or baryta (in Latin). Also in the 18th century, English mineralogist William Withering noted a heavy mineral in the lead mines of Cumberland, now known to be witherite. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England. Davy, by analogy with calcium, named "barium" after baryta, with the "-ium" ending signifying a metallic element. Robert Bunsen and Augustus Matthiessen obtained pure barium by electrolysis of a molten mixture of barium chloride and ammonium chloride.
The production of pure oxygen in the Brin process was a large-scale application of barium peroxide in the 1880s, before it was replaced by electrolysis and fractional distillation of liquefied air in the early 1900s. In this process barium oxide reacts with air at elevated temperature to form barium peroxide, which decomposes at a higher temperature, releasing oxygen:
2 BaO + O2 ⇌ 2 BaO2
Barium sulfate was first applied as a radiocontrast agent in X-ray imaging of the digestive system in 1908.
Occurrence and production
The abundance of barium is 0.0425% in the Earth's crust and 13 μg/L in sea water. The primary commercial source of barium is baryte (also called barytes or heavy spar), a barium sulfate mineral with deposits in many parts of the world. Another commercial source, far less important than baryte, is witherite, barium carbonate. The main deposits are located in Britain, Romania, and the former USSR.
The baryte reserves are estimated at between 0.7 and 2 billion tonnes. Production peaked at 8.3 million tonnes in 1981, but only 7–8% of that was used for barium metal or compounds. Baryte production has risen since the second half of the 1990s, from 5.6 million tonnes in 1996 to 7.6 in 2005 and 7.8 in 2011. China accounts for more than 50% of this output, followed by India (14% in 2011), Morocco (8.3%), the US (8.2%), Turkey (2.5%), and Iran and Kazakhstan (2.6% each).
The mined ore is washed, crushed, classified, and separated from quartz. If the quartz penetrates too deeply into the ore, or the iron, zinc, or lead content is abnormally high, then froth flotation is used. The product is a 98% pure baryte (by mass); the purity should be no less than 95%, with a minimal content of iron and silicon dioxide. It is then reduced by carbon to barium sulfide:
BaSO4 + 2 C → BaS + 2 CO2
The water-soluble barium sulfide is the starting point for other compounds: treating BaS with oxygen produces the sulfate, with nitric acid the nitrate, with aqueous carbon dioxide the carbonate, and so on. The nitrate can be thermally decomposed to yield the oxide. Barium metal is produced by reduction with aluminium at high temperature. The intermetallic compound BaAl4 is produced first:
3 BaO + 14 Al → 3 BaAl4 + Al2O3
The intermediate BaAl4 then reacts with barium oxide to produce the metal; note that not all of the barium is reduced:
8 BaO + BaAl4 → 7 Ba↓ + 2 BaAl2O4
The remaining barium oxide reacts with the formed aluminium oxide:
BaO + Al2O3 → BaAl2O4
and the overall reaction is
4 BaO + 2 Al → 3 Ba↓ + BaAl2O4
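The atom balance of the overall reaction (and of each step) can be checked mechanically. A small sketch with a minimal formula parser (handles simple formulas such as BaAl2O4, without parentheses):

```python
import re
from collections import Counter

def parse(formula):
    """Count atoms in a simple formula like 'BaAl2O4' (no parentheses)."""
    atoms = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[elem] += int(n) if n else 1
    return atoms

def side(species):
    """Sum atom counts over a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in species:
        for elem, n in parse(formula).items():
            total[elem] += coeff * n
    return total

# Overall reaction: 4 BaO + 2 Al -> 3 Ba + BaAl2O4
left = side([(4, "BaO"), (2, "Al")])
right = side([(3, "Ba"), (1, "BaAl2O4")])
print(left == right, dict(left))
```

Each intermediate step can be checked the same way by swapping in its own reactant and product lists.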
Barium vapor is condensed and packed into molds in an atmosphere of argon. This method is used commercially, yielding ultrapure barium. Commonly sold barium is about 99% pure, with main impurities being strontium and calcium (up to 0.8% and 0.25%) and other contaminants contributing less than 0.1%.
A similar reaction with silicon at elevated temperature yields barium and barium metasilicate. Electrolysis is not used because barium readily dissolves in molten halides and the product is rather impure.
Gemstone
The barium mineral benitoite (barium titanium silicate) occurs as a very rare blue fluorescent gemstone and is the official state gem of California.
Barium in seawater
Barium exists in seawater as the Ba2+ ion with an average oceanic concentration of 109 nmol/kg. Barium also exists in the ocean as BaSO4, or barite. Barium has a nutrient-like profile with a residence time of 10,000 years.
Barium shows a relatively consistent concentration in upper ocean seawater, excepting regions of high river inputs and regions with strong upwelling. There is little depletion of barium concentrations in the upper ocean for an ion with a nutrient-like profile, thus lateral mixing is important. Barium isotopic values show basin-scale balances instead of local or short-term processes.
Applications
Metal and alloys
Barium, as a metal or when alloyed with aluminium, is used to remove unwanted gases (gettering) from vacuum tubes, such as TV picture tubes. Barium is suitable for this purpose because of its low vapor pressure and reactivity towards oxygen, nitrogen, carbon dioxide, and water; it can even partly remove noble gases by dissolving them in the crystal lattice. This application is gradually disappearing due to the rising popularity of the tubeless LCD, LED, and plasma sets.
Other uses of elemental barium are minor and include an additive to silumin (aluminium–silicon alloys) that refines their structure, as well as
bearing alloys;
lead–tin soldering alloys – to increase the creep resistance;
alloy with nickel for spark plugs;
additive to steel and cast iron as an inoculant;
alloys with calcium, manganese, silicon, and aluminium as high-grade steel deoxidizers.
Barium sulfate and baryte
Barium sulfate (the mineral baryte, BaSO4) is important to the petroleum industry as a drilling fluid in oil and gas wells. The precipitate of the compound (called "blanc fixe", from the French for "permanent white") is used in paints and varnishes; as a filler in ringing ink, plastics, and rubbers; as a paper coating pigment; and in nanoparticles, to improve physical properties of some polymers, such as epoxies.
Barium sulfate has a low toxicity and relatively high density of ca. 4.5 g/cm3 (and thus opacity to X-rays). For this reason it is used as a radiocontrast agent in X-ray imaging of the digestive system ("barium meals" and "barium enemas"). Lithopone, a pigment that contains barium sulfate and zinc sulfide, is a permanent white with good covering power that does not darken when exposed to sulfides.
Other barium compounds
Other compounds of barium find only niche applications, limited by the toxicity of Ba2+ ions (barium carbonate is a rat poison), which is not a problem for the insoluble BaSO4.
Barium oxide coating on the electrodes of fluorescent lamps facilitates the release of electrons.
Because of its high atomic density, barium carbonate increases the refractive index and luster of glass and reduces leakage of X-rays from cathode-ray tube (CRT) TV sets.
Barium, typically as barium nitrate, imparts a yellow or "apple" green color to fireworks; for brilliant green, barium monochloride is used.
Barium peroxide is a catalyst in the aluminothermic reaction (thermite) for welding rail tracks. It is also a green flare in tracer ammunition and a bleaching agent.
Barium titanate is a promising electroceramic.
Barium fluoride is used for optics in infrared applications because of its wide transparency range of 0.15–12 micrometers.
YBCO was the first high-temperature superconductor that could be cooled by liquid nitrogen, with a transition temperature exceeding the boiling point of nitrogen.
Ferrite, a type of sintered ceramic composed of iron oxide (Fe2O3) and barium oxide (BaO), is both electrically nonconductive and ferrimagnetic, and can be temporarily or permanently magnetized.
Palaeoceanography
The lateral mixing of barium is caused by water mass mixing and ocean circulation. Global ocean circulation reveals a strong correlation between dissolved barium and silicic acid. The large-scale ocean circulation combined with remineralization of barium show a similar correlation between dissolved barium and ocean alkalinity.
Dissolved barium's correlation with silicic acid can be seen both vertically and spatially. Particulate barium shows a strong correlation with particulate organic carbon (POC). Barium is increasingly used as a basis for palaeoceanographic proxies. Through both dissolved and particulate barium's links to silicic acid and POC, it can be used to determine historical variations in the biological pump, the carbon cycle, and global climate.
The barium particulate barite (BaSO4), as one of many proxies, can be used to provide a host of historical information on processes in different oceanic settings (water column, sediments, and hydrothermal sites). In each setting there are differences in isotopic and elemental composition of the barite particulate. Barite in the water column, known as marine or pelagic barite, reveals information on seawater chemistry variation over time. Barite in sediments, known as diagenetic or cold seeps barite, gives information about sedimentary redox processes. Barite formed via hydrothermal activity at hydrothermal vents, known as hydrothermal barite, reveals alterations in the condition of the earth's crust around those vents.
Toxicity
Because of the high reactivity of the metal, toxicological data are available only for compounds. Soluble barium compounds are poisonous. In low doses, barium ions act as a muscle stimulant, and higher doses affect the nervous system, causing cardiac irregularities, tremors, weakness, anxiety, shortness of breath, and paralysis. This toxicity may be caused by Ba2+ blocking potassium ion channels, which are critical to the proper function of the nervous system. Other organs damaged by water-soluble barium compounds (i.e., barium ions) are the eyes, immune system, heart, respiratory system, and skin, causing, for example, blindness and sensitization.
Barium is not carcinogenic and does not bioaccumulate. Inhaled dust containing insoluble barium compounds can accumulate in the lungs, causing a benign condition called baritosis. The insoluble sulfate is nontoxic and is not classified as dangerous goods in transport regulations.
To avoid a potentially vigorous chemical reaction, barium metal is kept in an argon atmosphere or under mineral oils. Contact with air is dangerous and may cause ignition. Moisture, friction, heat, sparks, flames, shocks, static electricity, and exposure to oxidizers and acids should be avoided. Anything that may come into contact with barium should be electrically grounded.
See also
Han purple and Han blue – synthetic barium copper silicate pigments developed and used in ancient and imperial China
References
External links
Barium at The Periodic Table of Videos (University of Nottingham)
Elementymology & Elements Multidict
3-D Holographic Display Using Strontium Barium Niobate
Chemical elements
Alkaline earth metals
Toxicology
Reducing agents
Chemical elements with body-centered cubic structure
|
https://en.wikipedia.org/wiki/Berkelium
|
Berkelium is a transuranic radioactive chemical element with the symbol Bk and atomic number 97. It is a member of the actinide and transuranium element series. It is named after the city of Berkeley, California, the location of the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) where it was discovered in December 1949. Berkelium was the fifth transuranium element discovered after neptunium, plutonium, curium and americium.
The major isotope of berkelium, 249Bk, is synthesized in minute quantities in dedicated high-flux nuclear reactors, mainly at the Oak Ridge National Laboratory in Tennessee, United States, and at the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. The longest-lived and second-most important isotope, 247Bk, can be synthesized via irradiation of 244Cm with high-energy alpha particles.
Just over one gram of berkelium has been produced in the United States since 1967. There is no practical application of berkelium outside scientific research which is mostly directed at the synthesis of heavier transuranium elements and superheavy elements. A 22-milligram batch of berkelium-249 was prepared during a 250-day irradiation period and then purified for a further 90 days at Oak Ridge in 2009. This sample was used to synthesize the new element tennessine for the first time in 2009 at the Joint Institute for Nuclear Research, Russia, after it was bombarded with calcium-48 ions for 150 days. This was the culmination of the Russia–US collaboration on the synthesis of the heaviest elements on the periodic table.
Berkelium is a soft, silvery-white, radioactive metal. The berkelium-249 isotope emits low-energy electrons and thus is relatively safe to handle. It decays with a half-life of 330 days to californium-249, which is a strong emitter of ionizing alpha particles. This gradual transformation is an important consideration when studying the properties of elemental berkelium and its chemical compounds, since the formation of californium brings not only chemical contamination, but also free-radical effects and self-heating from the emitted alpha particles.
Characteristics
Physical
Berkelium is a soft, silvery-white, radioactive actinide metal. In the periodic table, it is located to the right of the actinide curium, to the left of the actinide californium and below the lanthanide terbium, with which it shares many similarities in physical and chemical properties. Its density of 14.78 g/cm3 lies between those of curium (13.52 g/cm3) and californium (15.1 g/cm3), as does its melting point of 986 °C, below that of curium (1340 °C) but higher than that of californium (900 °C). Berkelium is relatively soft and has one of the lowest bulk moduli among the actinides, at about 20 GPa.
Berkelium(III) ions show two sharp fluorescence peaks at 652 nanometers (red light) and 742 nanometers (deep red, near-infrared) due to internal transitions in the f-electron shell. The relative intensity of these peaks depends on the excitation power and temperature of the sample. This emission can be observed, for example, after dispersing berkelium ions in a silicate glass, by melting the glass in the presence of berkelium oxide or halide.
Between 70 K and room temperature, berkelium behaves as a Curie–Weiss paramagnetic material with an effective magnetic moment of 9.69 Bohr magnetons (µB) and a Curie temperature of 101 K. This magnetic moment is almost equal to the theoretical value of 9.72 µB calculated within the simple atomic L-S coupling model. Upon cooling to about 34 K, berkelium undergoes a transition to an antiferromagnetic state. The enthalpy of dissolution in hydrochloric acid at standard conditions is −600 kJ/mol, from which the standard enthalpy of formation (ΔfH°) of aqueous Bk3+ ions is obtained as −601 kJ/mol. The standard electrode potential Bk3+/Bk is −2.01 V. The ionization potential of a neutral berkelium atom is 6.23 eV.
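The 9.72 µB theoretical value can be reproduced with the Landé formula applied to the Hund's-rule ground term of the Bk3+ ion (5f8 configuration, 7F6 term, the same as for its lanthanide analogue Tb3+). A minimal sketch of that arithmetic, assuming the simple L-S (Russell–Saunders) coupling model mentioned above:

```python
from math import sqrt

# Hund's-rule ground term of Bk3+ (5f^8) is 7F6: S = 3, L = 3, J = 6
S, L, J = 3, 3, 6

# Lande g-factor for Russell-Saunders (L-S) coupling
g = 1 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

# Effective paramagnetic moment in Bohr magnetons: mu_eff = g * sqrt(J(J+1))
mu_eff = g * sqrt(J * (J + 1))

print(g)                 # 1.5
print(round(mu_eff, 2))  # 9.72
```

The measured 9.69 µB lies slightly below this free-ion estimate, as the text notes.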
Allotropes
At ambient conditions, berkelium assumes its most stable α form, which has a hexagonal symmetry, space group P63/mmc, with lattice parameters of 341 pm and 1107 pm. The crystal has a double-hexagonal close-packed structure with the layer sequence ABAC and so is isotypic (having a similar structure) with α-lanthanum and the α-forms of actinides beyond curium. This crystal structure changes with pressure and temperature. When compressed at room temperature to 7 GPa, α-berkelium transforms to the β modification, which has a face-centered cubic (fcc) symmetry and space group Fm3̄m. This transition occurs without change in volume, but the enthalpy increases by 3.66 kJ/mol. Upon further compression to 25 GPa, berkelium transforms to an orthorhombic γ-berkelium structure similar to that of α-uranium. This transition is accompanied by a 12% volume decrease and delocalization of the electrons in the 5f electron shell. No further phase transitions are observed up to 57 GPa.
Upon heating, α-berkelium transforms into another phase with an fcc lattice (but slightly different from β-berkelium), space group Fm3̄m and a lattice constant of 500 pm; this fcc structure is equivalent to the closest packing with the sequence ABC. This phase is metastable and will gradually revert to the original α-berkelium phase at room temperature. The temperature of the phase transition is believed to be quite close to the melting point.
Chemical
Like all actinides, berkelium dissolves in various aqueous inorganic acids, liberating gaseous hydrogen and converting into the trivalent Bk(III) state. This trivalent oxidation state (+3) is the most stable, especially in aqueous solutions, but tetravalent (+4), pentavalent (+5), and possibly divalent (+2) berkelium compounds are also known. The existence of divalent berkelium salts is uncertain and has only been reported in mixed lanthanum(III) chloride–strontium chloride melts. A similar behavior is observed for the lanthanide analogue of berkelium, terbium. Aqueous solutions of Bk3+ ions are green in most acids. The color of Bk4+ ions is yellow in hydrochloric acid and orange-yellow in sulfuric acid. Berkelium does not react rapidly with oxygen at room temperature, possibly due to the formation of a protective oxide surface layer. However, it reacts with molten metals, hydrogen, halogens, chalcogens and pnictogens to form various binary compounds.
Isotopes
Nineteen isotopes and six nuclear isomers (excited states of an isotope) of berkelium have been characterized, with mass numbers ranging from 233 to 253 (except 235 and 237). All of them are radioactive. The longest half-lives are observed for 247Bk (1,380 years), 248Bk (over 300 years), and 249Bk (330 days); the half-lives of the other isotopes range from microseconds to several days. The isotope which is the easiest to synthesize is berkelium-249. This emits mostly soft β-particles which are inconvenient for detection. Its alpha radiation is rather weak (1.45%) with respect to the β-radiation, but is sometimes used to detect this isotope. The second important berkelium isotope, berkelium-247, is an alpha-emitter, as are most actinide isotopes.
Occurrence
All berkelium isotopes have a half-life far too short to be primordial. Therefore, any primordial berkelium − that is, berkelium present on the Earth during its formation − has decayed by now.
On Earth, berkelium is mostly concentrated in certain areas that were used for atmospheric nuclear weapons tests between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster, the Three Mile Island accident and the 1968 Thule Air Base B-52 crash. Analysis of the debris at the testing site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including berkelium. For reasons of military secrecy, this result was not published until 1956.
Among the berkelium isotopes, nuclear reactors produce mostly berkelium-249. During storage and before fuel disposal, most of it beta-decays to californium-249. The latter has a half-life of 351 years, which is relatively long compared to the half-lives of other isotopes produced in the reactor, and is therefore undesirable in the disposal products.
The transuranium elements from americium to fermium, including berkelium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Berkelium is also one of the elements that have theoretically been detected in Przybylski's Star.
History
Although very small amounts of berkelium were possibly produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in December 1949 by Glenn T. Seaborg, Albert Ghiorso, Stanley Gerald Thompson, and Kenneth Street Jr. They used the 60-inch cyclotron at the University of California, Berkeley. Similar to the nearly simultaneous discovery of americium (element 95) and curium (element 96) in 1944, the new elements berkelium and californium (element 98) were both produced in 1949–1950.
The name choice for element 97 followed the previous tradition of the Californian group to draw an analogy between the newly discovered actinide and the lanthanide element positioned above it in the periodic table. Previously, americium was named after a continent as its analogue europium, and curium honored scientists Marie and Pierre Curie as the lanthanide above it, gadolinium, was named after the explorer of the rare-earth elements Johan Gadolin. Thus the discovery report by the Berkeley group reads: "It is suggested that element 97 be given the name berkelium (symbol Bk) after the city of Berkeley in a manner similar to that used in naming its chemical homologue terbium (atomic number 65) whose name was derived from the town of Ytterby, Sweden, where the rare earth minerals were first found." This tradition ended with berkelium, though, as the naming of the next discovered actinide, californium, was not related to its lanthanide analogue dysprosium, but after the discovery place.
The most difficult steps in the synthesis of berkelium were its separation from the final products and the production of sufficient quantities of americium for the target material. First, an americium (241Am) nitrate solution was coated on a platinum foil; the solution was evaporated and the residue converted by annealing to americium dioxide (AmO2). This target was irradiated with 35 MeV alpha particles for 6 hours in the 60-inch cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley. The (α,2n) reaction induced by the irradiation yielded the 243Bk isotope and two free neutrons:
^{241}_{95}Am + ^{4}_{2}He -> ^{243}_{97}Bk + 2^{1}_{0}n
After the irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The product was centrifuged and re-dissolved in nitric acid. To separate berkelium from the unreacted americium, this solution was added to a mixture of ammonium and ammonium sulfate and heated to convert all the dissolved americium into the oxidation state +6. Unoxidized residual americium was precipitated by the addition of hydrofluoric acid as americium(III) fluoride (AmF3). This step yielded a mixture of the accompanying product curium and the expected element 97 in the form of trifluorides. The mixture was converted to the corresponding hydroxides by treating it with potassium hydroxide, and after centrifugation, was dissolved in perchloric acid.
Further separation was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperature. The chromatographic separation behavior of element 97 was unknown at the time, but was anticipated by analogy with terbium. The first results were disappointing, because no alpha-particle emission signature could be detected in the elution product. Upon further analysis, searching for characteristic X-rays and conversion-electron signals, a berkelium isotope was eventually detected. Its mass number was uncertain between 243 and 244 in the initial report, but was later established as 243.
Synthesis and extraction
Preparation of isotopes
Berkelium is produced by bombarding lighter actinides uranium (238U) or plutonium (239Pu) with neutrons in a nuclear reactor. In a more common case of uranium fuel, plutonium is produced first by neutron capture (the so-called (n,γ) reaction or neutron fusion) followed by beta-decay:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu (the times are half-lives)
Plutonium-239 is further irradiated by a source that has a high neutron flux, several times higher than a conventional nuclear reactor, such as the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, USA. The higher flux promotes fusion reactions involving not one but several neutrons, converting 239Pu to 244Cm and then to 249Cm:
Curium-249 has a short half-life of 64 minutes, and thus its further conversion to 250Cm has a low probability. Instead, it transforms by beta-decay into 249Bk:
^{249}_{96}Cm ->[{\beta^-}][64.15 \ \ce{min}] ^{249}_{97}Bk ->[\beta^-][330 \ \ce{d}] ^{249}_{98}Cf
The thus-produced 249Bk has a long half-life of 330 days and thus can capture another neutron. However, the product, 250Bk, again has a relatively short half-life of 3.212 hours and thus does not yield any heavier berkelium isotopes. It instead decays to the californium isotope 250Cf:
^{249}_{97}Bk ->[\ce{(n,\gamma)}] ^{250}_{97}Bk ->[\beta^-][3.212 \ \ce{h}] ^{250}_{98}Cf
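The chain above can be illustrated numerically: with a 330-day half-life, most of a freshly formed 249Bk inventory survives a months-long irradiation, which is why it can capture another neutron, unlike the short-lived 249Cm and 250Bk. A minimal decay sketch (neutron-capture losses are ignored; the 250-day interval is illustrative):

```python
from math import exp, log

T_HALF_BK249 = 330.0            # days, beta decay of 249Bk to 249Cf
lam = log(2) / T_HALF_BK249     # decay constant, per day

def fraction_remaining(t_days):
    """Fraction of an initial 249Bk inventory left after t_days (decay only)."""
    return exp(-lam * t_days)

print(round(fraction_remaining(250), 2))   # ≈ 0.59 after a 250-day irradiation
```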
Although 247Bk is the most stable isotope of berkelium, its production in nuclear reactors is very difficult because its potential progenitor 247Cm has never been observed to undergo beta decay. Thus, 249Bk is the most accessible isotope of berkelium, which is still available only in small quantities (only 0.66 grams were produced in the US over the period 1967–1983) at a high price on the order of 185 USD per microgram. It is the only berkelium isotope available in bulk quantities, and thus the only berkelium isotope whose properties can be extensively studied.
The isotope 248Bk was first obtained in 1956 by bombarding a mixture of curium isotopes with 25 MeV α-particles. Although its direct detection was hindered by strong signal interference with 245Bk, the existence of a new isotope was proven by the growth of the decay product 248Cf, which had been previously characterized. The half-life of 248Bk was estimated as hours, though later work in 1965 gave a half-life in excess of 300 years (which may be due to an isomeric state). Berkelium-247 was produced during the same year by irradiating 244Cm with alpha-particles:
Berkelium-242 was synthesized in 1979 by bombarding 235U with 11B, 238U with 10B, 232Th with 14N or 232Th with 15N. It converts by electron capture to 242Cm with a half-life of minutes. A search for an initially suspected isotope 241Bk was then unsuccessful; 241Bk has since been synthesized.
Separation
The fact that berkelium readily assumes the oxidation state +4 in solids, and is relatively stable in this state in liquids, greatly assists the separation of berkelium from the many other actinides that are inevitably produced in relatively large amounts during nuclear synthesis and that often favor the +3 state. This fact was not yet known in the initial experiments, which used a more complex separation procedure. Various inorganic oxidation agents can be applied to the solutions to convert berkelium to the +4 state, such as bromates (BrO3−), bismuthates (BiO3−), chromates (CrO42− and Cr2O72−), silver(I) thiolate, lead(IV) oxide (PbO2), ozone (O3), or photochemical oxidation procedures. More recently, it has been discovered that some organic and bio-inspired molecules, such as the chelator called 3,4,3-LI(1,2-HOPO), can also oxidize Bk(III) and stabilize Bk(IV) under mild conditions. Bk(IV) is then extracted with ion exchange, extraction chromatography or liquid–liquid extraction using HDEHP (bis-(2-ethylhexyl) phosphoric acid), amines, tributyl phosphate or various other reagents. These procedures separate berkelium from most trivalent actinides and lanthanides, except for the lanthanide cerium (lanthanides are absent in the irradiation target but are created in various nuclear fission decay chains).
A more detailed procedure adopted at the Oak Ridge National Laboratory was as follows: the initial mixture of actinides is processed with ion exchange using a lithium chloride reagent, then precipitated as hydroxides, filtered and dissolved in nitric acid. It is then treated with high-pressure elution from cation-exchange resins, and the berkelium phase is oxidized and extracted using one of the procedures described above. Reduction of the thus-obtained Bk(IV) to the +3 oxidation state yields a solution which is nearly free from other actinides (but contains cerium). Berkelium and cerium are then separated with another round of ion-exchange treatment.
Bulk metal preparation
In order to characterize chemical and physical properties of solid berkelium and its compounds, a program was initiated in 1952 at the Material Testing Reactor, Arco, Idaho, US. It resulted in preparation of an eight-gram plutonium-239 target and in the first production of macroscopic quantities (0.6 micrograms) of berkelium by Burris B. Cunningham and Stanley Gerald Thompson in 1958, after a continuous reactor irradiation of this target for six years. This irradiation method was and still is the only way of producing weighable amounts of the element, and most solid-state studies of berkelium have been conducted on microgram or submicrogram-sized samples.
The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor at the Oak Ridge National Laboratory in Tennessee, USA, and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium elements (atomic number greater than 96). These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not publicly reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium-249 and einsteinium, and picogram quantities of fermium. In total, just over one gram of berkelium-249 has been produced at Oak Ridge since 1967.
The first berkelium metal sample, weighing 1.7 micrograms, was prepared in 1971 by the reduction of berkelium(III) fluoride with lithium vapor at 1000 °C; the fluoride was suspended on a tungsten wire above a tantalum crucible containing molten lithium. Later, metal samples weighing up to 0.5 milligrams were obtained with this method.
Similar results are obtained with berkelium(IV) fluoride. Berkelium metal can also be produced by the reduction of berkelium oxide with thorium or lanthanum.
Compounds
Oxides
Two oxides of berkelium are known, with berkelium in the oxidation states +3 (Bk2O3) and +4 (BkO2). Berkelium(IV) oxide is a brown solid, while berkelium(III) oxide is a yellow-green solid with a melting point of 1920 °C and is formed from BkO2 by reduction with molecular hydrogen:

2 BkO2 + H2 → Bk2O3 + H2O
Upon heating to 1200 °C, the Bk2O3 oxide undergoes a phase change; it undergoes another phase change at 1750 °C. Such three-phase behavior is typical for the actinide sesquioxides. Berkelium(II) oxide, BkO, has been reported as a brittle gray solid, but its exact chemical composition remains uncertain.
Halides
In halides, berkelium assumes the oxidation states +3 and +4. The +3 state is the most stable, especially in solutions, while the tetravalent halides BkF4 and Cs2BkCl6 are only known in the solid phase. The coordination of the berkelium atom in its trivalent fluoride and chloride is tricapped trigonal prismatic, with a coordination number of 9. In the trivalent bromide, it is bicapped trigonal prismatic (coordination number 8) or octahedral (coordination number 6), and in the iodide it is octahedral.
Berkelium(IV) fluoride (BkF4) is a yellow-green ionic solid that is isotypic with uranium tetrafluoride or zirconium tetrafluoride. Berkelium(III) fluoride (BkF3) is also a yellow-green solid, but it has two crystalline structures. The most stable phase at low temperatures is isotypic with yttrium(III) fluoride, while upon heating to between 350 and 600 °C, it transforms to the structure found in lanthanum trifluoride.
Visible amounts of berkelium(III) chloride (BkCl3) were first isolated and characterized in 1962, and weighed only 3 billionths of a gram. It can be prepared by introducing hydrogen chloride vapors into an evacuated quartz tube containing berkelium oxide at a temperature of about 500 °C. This green solid has a melting point of 600 °C and is isotypic with uranium(III) chloride. Upon heating to nearly its melting point, BkCl3 converts into an orthorhombic phase.
Two forms of berkelium(III) bromide are known: one with berkelium having coordination number 6, and one with coordination number 8. The latter is less stable and transforms to the former phase upon heating to about 350 °C. An important phenomenon for radioactive solids has been studied on these two crystal forms: the structure of fresh and aged 249BkBr3 samples was probed by X-ray diffraction over a period longer than 3 years, so that various fractions of berkelium-249 had beta-decayed to californium-249. No change in structure was observed upon the 249BkBr3–249CfBr3 transformation. However, other differences were noted for 249BkBr3 and 249CfBr3. For example, the latter could be reduced with hydrogen to 249CfBr2, but the former could not; this result was reproduced on individual 249BkBr3 and 249CfBr3 samples, as well as on samples containing both bromides. The ingrowth of californium in berkelium occurs at a rate of 0.22% per day and is an intrinsic obstacle in studying berkelium properties. Besides chemical contamination, 249Cf, being an alpha emitter, brings undesirable self-damage of the crystal lattice and the resulting self-heating. The chemical effect can, however, be avoided by performing measurements as a function of time and extrapolating the obtained results.
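The quoted ingrowth rate is essentially the initial decay rate implied by the 330-day half-life of 249Bk. A minimal check, which also estimates how much of a sample has become californium by the end of a roughly 3-year study:

```python
from math import exp, log

T_HALF_BK249 = 330.0           # days
lam = log(2) / T_HALF_BK249    # decay constant, per day

# Initial ingrowth rate of 249Cf in a freshly purified 249Bk sample,
# in percent per day -- consistent with the ~0.22% per day quoted above
rate_pct_per_day = 100 * lam
print(round(rate_pct_per_day, 2))   # 0.21

# Californium fraction accumulated over a ~3-year diffraction study
t_days = 3 * 365.25
cf_fraction = 1 - exp(-lam * t_days)
print(round(cf_fraction, 2))        # 0.9
```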
Other inorganic compounds
The pnictides of berkelium-249 of the type BkX are known for the elements nitrogen, phosphorus, arsenic and antimony. They crystallize in the rock-salt structure and are prepared by the reaction of either berkelium hydride or metallic berkelium with these elements at elevated temperature (about 600 °C) under high vacuum.
Berkelium(III) sulfide, Bk2S3, is prepared by either treating berkelium oxide with a mixture of hydrogen sulfide and carbon disulfide vapors at 1130 °C, or by directly reacting metallic berkelium with elemental sulfur. These procedures yield brownish-black crystals.
Berkelium(III) and berkelium(IV) hydroxides are both stable in 1 molar solutions of sodium hydroxide. Berkelium(III) phosphate (BkPO4) has been prepared as a solid, which shows strong fluorescence under excitation with green light. Berkelium hydrides are produced by reacting the metal with hydrogen gas at temperatures of about 250 °C. They are non-stoichiometric, with the nominal formula BkH2+x (0 < x < 1). Several other salts of berkelium are known, including an oxysulfide (Bk2O2S) and hydrated nitrate, chloride, sulfate and oxalate salts. Thermal decomposition of the hydrated sulfate at about 600 °C in an argon atmosphere (to avoid oxidation to BkO2) yields crystals of berkelium oxysulfate (Bk2O2SO4). This compound is thermally stable to at least 1000 °C in an inert atmosphere.
Organoberkelium compounds
Berkelium forms a trigonal (η5–C5H5)3Bk metallocene complex with three cyclopentadienyl rings, which can be synthesized by reacting berkelium(III) chloride with molten beryllocene (Be(C5H5)2) at about 70 °C. It has an amber color and a density of 2.47 g/cm3. The complex is stable to heating to at least 250 °C, and sublimes without melting at about 350 °C. The high radioactivity of berkelium gradually destroys the compound (within a period of weeks). One cyclopentadienyl ring in (η5–C5H5)3Bk can be substituted by chlorine to yield [Bk(C5H5)2Cl]2. The optical absorption spectra of this compound are very similar to those of (η5–C5H5)3Bk.
Applications
There is currently no use for any isotope of berkelium outside basic scientific research. Berkelium-249 is a common target nuclide to prepare still heavier transuranium elements and superheavy elements, such as lawrencium, rutherfordium and bohrium. It is also useful as a source of the isotope californium-249, which is used for studies on the chemistry of californium in preference to the more radioactive californium-252 that is produced in neutron bombardment facilities such as the HFIR.
A 22 milligram batch of berkelium-249 was prepared in a 250-day irradiation and then purified for 90 days at Oak Ridge in 2009. This target yielded the first 6 atoms of tennessine at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, after bombarding it with calcium ions in the U400 cyclotron for 150 days. This synthesis was a culmination of the Russia-US collaboration between JINR and Lawrence Livermore National Laboratory on the synthesis of elements 113 to 118 which was initiated in 1989.
Nuclear fuel cycle
The nuclear fission properties of berkelium are different from those of the neighboring actinides curium and californium, and they suggest that berkelium would perform poorly as a fuel in a nuclear reactor. Specifically, berkelium-249 has a moderately large neutron capture cross section of 710 barns for thermal neutrons and a resonance integral of 1200 barns, but a very low fission cross section for thermal neutrons. In a thermal reactor, much of it will therefore be converted to berkelium-250, which quickly decays to californium-250. In principle, berkelium-249 can sustain a nuclear chain reaction in a fast breeder reactor. Its critical mass is relatively high at 192 kg; it can be reduced with a water or steel reflector, but would still exceed the world production of this isotope.
Berkelium-247 can maintain a chain reaction both in a thermal-neutron and in a fast-neutron reactor; however, its production is rather complex and thus its availability is much lower than its critical mass, which is about 75.7 kg for a bare sphere, 41.2 kg with a water reflector and 35.2 kg with a steel reflector (30 cm thickness).
Health issues
Little is known about the effects of berkelium on the human body, and analogies with other elements may not be drawn because of different radiation products (electrons for berkelium and alpha particles, neutrons, or both for most other actinides). The low energy of the electrons emitted from berkelium-249 (less than 126 keV) hinders its detection, due to signal interference with other decay processes, but also makes this isotope relatively harmless to humans compared with other actinides. However, berkelium-249 transforms with a half-life of only 330 days into the strong alpha-emitter californium-249, which is rather dangerous and has to be handled in a glovebox in a dedicated laboratory.
Most available berkelium toxicity data originate from research on animals. Upon ingestion by rats, only about 0.01% of the berkelium ends up in the bloodstream. From there, about 65% goes to the bones, where it remains for about 50 years, 25% to the lungs (biological half-life about 20 years), 0.035% to the testicles and 0.01% to the ovaries, where berkelium stays indefinitely. The balance of about 10% is excreted. In all these organs berkelium might promote cancer, and in the skeleton its radiation can damage red blood cells. The maximum permissible amount of berkelium-249 in the human skeleton is 0.4 nanograms.
References
Bibliography
External links
Berkelium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Chemical elements with double hexagonal close-packed structure
Actinides
Synthetic elements
|
https://en.wikipedia.org/wiki/Bauhaus
|
The Staatliches Bauhaus, commonly known simply as the Bauhaus, was a German art school operational from 1919 to 1933 that combined crafts and the fine arts. The school became famous for its approach to design, which attempted to unify individual artistic vision with the principles of mass production and an emphasis on function. Along with the doctrine of functionalism, the Bauhaus initiated the conceptual understanding of architecture and design.
The Bauhaus was founded by architect Walter Gropius in Weimar. It was grounded in the idea of creating a Gesamtkunstwerk ("comprehensive artwork") in which all the arts would eventually be brought together. The Bauhaus style later became one of the most influential currents in modern design, modernist architecture, and architectural education. The Bauhaus movement had a profound influence on subsequent developments in art, architecture, graphic design, interior design, industrial design, and typography. Staff at the Bauhaus included prominent artists such as Paul Klee, Wassily Kandinsky, and László Moholy-Nagy at various points.
The school existed in three German cities—Weimar, from 1919 to 1925; Dessau, from 1925 to 1932; and Berlin, from 1932 to 1933—under three different architect-directors: Walter Gropius from 1919 to 1928; Hannes Meyer from 1928 to 1930; and Ludwig Mies van der Rohe from 1930 until 1933, when the school was closed by its own leadership under pressure from the Nazi regime, having been painted as a centre of communist intellectualism. Internationally, former key figures of Bauhaus were successful in the United States and became known as the avant-garde for the International Style.
The changes of venue and leadership resulted in a constant shifting of focus, technique, instructors, and politics. For example, the pottery shop was discontinued when the school moved from Weimar to Dessau, even though it had been an important revenue source; when Mies van der Rohe took over the school in 1930, he transformed it into a private school and would not allow any supporters of Hannes Meyer to attend it.
Term and concept
Bauhaus is sometimes mistakenly called a style; it was a school, not a single unified style. However, several specific features are identified in its forms and shapes: simple geometric shapes like rectangles and spheres, without elaborate decorations. Buildings, furniture, and fonts often feature rounded corners and sometimes rounded walls. Other buildings are characterized by rectangular features, for example protruding balconies with flat, chunky railings facing the street, and long banks of windows. Furniture often uses chrome metal pipes that curve at corners. Some outlines can be defined as a tool for creating an ideal form, which is the basis of the architectural concept.
Bauhaus and German modernism
After Germany's defeat in World War I and the establishment of the Weimar Republic, a renewed liberal spirit allowed an upsurge of radical experimentation in all the arts, which had been suppressed by the old regime. Many Germans of left-wing views were influenced by the cultural experimentation that followed the Russian Revolution, such as constructivism. Such influences can be overstated: Gropius did not share these radical views, and said that Bauhaus was entirely apolitical. Just as important was the influence of the 19th-century English designer William Morris (1834–1896), who had argued that art should meet the needs of society and that there should be no distinction between form and function. Thus, the Bauhaus style, also known as the International Style, was marked by the absence of ornamentation and by harmony between the function of an object or a building and its design.
However, the most important influence on Bauhaus was modernism, a cultural movement whose origins lay as early as the 1880s, and which had already made its presence felt in Germany before the World War, despite the prevailing conservatism. The design innovations commonly associated with Gropius and the Bauhaus—the radically simplified forms, the rationality and functionality, and the idea that mass production was reconcilable with the individual artistic spirit—were already partly developed in Germany before the Bauhaus was founded. The German national designers' organization Deutscher Werkbund was formed in 1907 by Hermann Muthesius to harness the new potentials of mass production, with a mind towards preserving Germany's economic competitiveness with England. In its first seven years, the Werkbund came to be regarded as the authoritative body on questions of design in Germany, and was copied in other countries. Many fundamental questions of craftsmanship versus mass production, the relationship of usefulness and beauty, the practical purpose of formal beauty in a commonplace object, and whether or not a single proper form could exist, were argued out among its 1,870 members (by 1914).
German architectural modernism was known as Neues Bauen. Beginning in June 1907, Peter Behrens' pioneering industrial design work for the German electrical company AEG successfully integrated art and mass production on a large scale. He designed consumer products, standardized parts, created clean-lined designs for the company's graphics, developed a consistent corporate identity, built the modernist landmark AEG Turbine Factory, and made full use of newly developed materials such as poured concrete and exposed steel. Behrens was a founding member of the Werkbund, and both Walter Gropius and Adolf Meyer worked for him in this period.
The Bauhaus was founded at a time when the German zeitgeist had turned from emotional Expressionism to the matter-of-fact New Objectivity. An entire group of working architects, including Erich Mendelsohn, Bruno Taut and Hans Poelzig, turned away from fanciful experimentation and towards rational, functional, sometimes standardized building. Beyond the Bauhaus, many other significant German-speaking architects in the 1920s responded to the same aesthetic issues and material possibilities as the school. They also responded to the promise of a "minimal dwelling" written into the new Weimar Constitution. Ernst May, Bruno Taut and Martin Wagner, among others, built large housing blocks in Frankfurt and Berlin. The acceptance of modernist design into everyday life was the subject of publicity campaigns, well-attended public exhibitions like the Weissenhof Estate, films, and sometimes fierce public debate.
Bauhaus and Vkhutemas
The Vkhutemas, the Russian state art and technical school founded in 1920 in Moscow, has been compared to Bauhaus. Founded a year after the Bauhaus school, Vkhutemas has close parallels to the German Bauhaus in its intent, organization and scope. The two schools were the first to train artist-designers in a modern manner. Both schools were state-sponsored initiatives to merge traditional craft with modern technology, with a basic course in aesthetic principles, courses in color theory, industrial design, and architecture. Vkhutemas was a larger school than the Bauhaus, but it was less publicised outside the Soviet Union and consequently, is less familiar in the West.
With the internationalism of modern architecture and design, there were many exchanges between the Vkhutemas and the Bauhaus. The second Bauhaus director Hannes Meyer attempted to organise an exchange between the two schools, while Hinnerk Scheper of the Bauhaus collaborated with various Vkhutein members on the use of colour in architecture. In addition, El Lissitzky's book Russia: an Architecture for World Revolution, published in German in 1930, featured several illustrations of Vkhutemas/Vkhutein projects.
History of the Bauhaus
Weimar
The school was founded by Walter Gropius in Weimar on 1 April 1919, as a merger of the Grand-Ducal Saxon Academy of Fine Art and the Grand Ducal Saxon School of Arts and Crafts for a newly affiliated architecture department. Its roots lay in the arts and crafts school founded by the Grand Duke of Saxe-Weimar-Eisenach in 1906, and directed by Belgian Art Nouveau architect Henry van de Velde. When van de Velde was forced to resign in 1915 because he was Belgian, he suggested Gropius, Hermann Obrist, and August Endell as possible successors. In 1919, after delays caused by World War I and a lengthy debate over who should head the institution and the socio-economic meanings of a reconciliation of the fine arts and the applied arts (an issue which remained a defining one throughout the school's existence), Gropius was made the director of a new institution integrating the two called the Bauhaus. In the pamphlet for an April 1919 exhibition entitled Exhibition of Unknown Architects, Gropius, still very much under the influence of William Morris and the British
Arts and Crafts Movement, proclaimed his goal as being "to create a new guild of craftsmen, without the class distinctions which raise an arrogant barrier between craftsman and artist." Gropius's neologism Bauhaus references both building and the Bauhütte, a premodern guild of stonemasons. The early intention was for the Bauhaus to be a combined architecture school, crafts school, and academy of the arts. Swiss painter Johannes Itten, German-American painter Lyonel Feininger, and German sculptor Gerhard Marcks, along with Gropius, comprised the faculty of the Bauhaus in 1919. By the following year their ranks had grown to include German painter, sculptor, and designer Oskar Schlemmer who headed the theatre workshop, and Swiss painter Paul Klee, joined in 1922 by Russian painter Wassily Kandinsky. A tumultuous year at the Bauhaus, 1922 also saw the move of Dutch painter Theo van Doesburg to Weimar to promote De Stijl ("The Style"), and a visit to the Bauhaus by Russian Constructivist artist and architect El Lissitzky.
From 1919 to 1922 the school was shaped by the pedagogical and aesthetic ideas of Johannes Itten, who taught the Vorkurs or "preliminary course" that was the introduction to the ideas of the Bauhaus. Itten was heavily influenced in his teaching by the ideas of Franz Cižek and Friedrich Wilhelm August Fröbel. He was also influenced in respect to aesthetics by the work of the Der Blaue Reiter group in Munich, as well as the work of Austrian Expressionist Oskar Kokoschka. The influence of German Expressionism favoured by Itten was analogous in some ways to the fine arts side of the ongoing debate. This influence culminated with the addition of Der Blaue Reiter founding member Wassily Kandinsky to the faculty and ended when Itten resigned in late 1923. Itten was replaced by the Hungarian designer László Moholy-Nagy, who rewrote the Vorkurs with a leaning towards the New Objectivity favoured by Gropius, which was analogous in some ways to the applied arts side of the debate. Although this shift was an important one, it did not represent a radical break from the past so much as a small step in a broader, more gradual socio-economic movement that had been going on at least since 1907, when van de Velde had argued for a craft basis for design while Hermann Muthesius had begun implementing industrial prototypes.
Gropius was not necessarily against Expressionism, and in fact, himself in the same 1919 pamphlet proclaiming this "new guild of craftsmen, without the class snobbery", described "painting and sculpture rising to heaven out of the hands of a million craftsmen, the crystal symbol of the new faith of the future." By 1923, however, Gropius was no longer evoking images of soaring Romanesque cathedrals and the craft-driven aesthetic of the "Völkisch movement", instead declaring "we want an architecture adapted to our world of machines, radios and fast cars." Gropius argued that a new period of history had begun with the end of the war. He wanted to create a new architectural style to reflect this new era. His style in architecture and consumer goods was to be functional, cheap and consistent with mass production. To these ends, Gropius wanted to reunite art and craft to arrive at high-end functional products with artistic merit. The Bauhaus issued a magazine called Bauhaus and a series of books called "Bauhausbücher". Since the Weimar Republic lacked the number of raw materials available to the United States and Great Britain, it had to rely on the proficiency of a skilled labour force and an ability to export innovative and high-quality goods. Therefore, designers were needed and so was a new type of art education. The school's philosophy stated that the artist should be trained to work with the industry.
Weimar was in the German state of Thuringia, and the Bauhaus school received state support from the Social Democrat-controlled Thuringian state government. The school in Weimar experienced political pressure from conservative circles in Thuringian politics, increasingly so after 1923 as political tension rose. One condition placed on the Bauhaus in this new political environment was the exhibition of work undertaken at the school. This condition was met in 1923 with the Bauhaus' exhibition of the experimental Haus am Horn. The Ministry of Education placed the staff on six-month contracts and cut the school's funding in half. The Bauhaus issued a press release on 26 December 1924, setting the closure of the school for the end of March 1925. At this point it had already been looking for alternative sources of funding. After the Bauhaus moved to Dessau, a school of industrial design with teachers and staff less antagonistic to the conservative political regime remained in Weimar. This school was eventually known as the Technical University of Architecture and Civil Engineering, and in 1996 changed its name to Bauhaus-University Weimar.
Dessau
The Bauhaus moved to Dessau in 1925 and new facilities there were inaugurated in late 1926. Gropius's design for the Dessau facilities was a return to the futuristic Gropius of 1914 that had more in common with the International style lines of the Fagus Factory than the stripped down Neo-classical of the Werkbund pavilion or the Völkisch Sommerfeld House. During the Dessau years, there was a remarkable change in direction for the school. According to Elaine Hoffman, Gropius had approached the Dutch architect Mart Stam to run the newly founded architecture program, and when Stam declined the position, Gropius turned to Stam's friend and colleague in the ABC group, Hannes Meyer.
Meyer became director when Gropius resigned in February 1928, and brought the Bauhaus its two most significant building commissions, both of which still exist: five apartment buildings in the city of Dessau, and the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer favoured measurements and calculations in his presentations to clients, along with the use of off-the-shelf architectural components to reduce costs. This approach proved attractive to potential clients. The school turned its first profit under his leadership in 1929.
But Meyer also generated a great deal of conflict. As a radical functionalist, he had no patience with the aesthetic program and forced the resignations of Herbert Bayer, Marcel Breuer, and other long-time instructors. Even though Meyer shifted the orientation of the school further to the left than it had been under Gropius, he didn't want the school to become a tool of left-wing party politics. He prevented the formation of a student communist cell, and in the increasingly dangerous political atmosphere, this became a threat to the existence of the Dessau school. Dessau mayor Fritz Hesse fired him in the summer of 1930. The Dessau city council attempted to convince Gropius to return as head of the school, but Gropius instead suggested Ludwig Mies van der Rohe. Mies was appointed in 1930 and immediately interviewed each student, dismissing those that he deemed uncommitted. He halted the school's manufacture of goods so that the school could focus on teaching, and appointed no new faculty other than his close confidant Lilly Reich. By 1931, the Nazi Party was becoming more influential in German politics. When it gained control of the Dessau city council, it moved to close the school.
Berlin
In late 1932, Mies rented a derelict factory in Berlin (Birkbusch Street 49) to use as the new Bauhaus with his own money. The students and faculty rehabilitated the building, painting the interior white. The school operated for ten months without further interference from the Nazi Party. In 1933, the Gestapo closed down the Berlin school. Mies protested the decision, eventually speaking to the head of the Gestapo, who agreed to allow the school to re-open. However, shortly after receiving a letter permitting the opening of the Bauhaus, Mies and the other faculty agreed to voluntarily shut down the school.
Although neither the Nazi Party nor Adolf Hitler had a cohesive architectural policy before they came to power in 1933, Nazi writers like Wilhelm Frick and Alfred Rosenberg had already labelled the Bauhaus "un-German" and criticized its modernist styles, deliberately generating public controversy over issues like flat roofs. Increasingly through the early 1930s, they characterized the Bauhaus as a front for communists and social liberals. Indeed, when Meyer was fired in 1930, a number of communist students loyal to him moved to the Soviet Union.
Even before the Nazis came to power, political pressure on Bauhaus had increased. The Nazi movement, from nearly the start, denounced the Bauhaus for its "degenerate art", and the Nazi regime was determined to crack down on what it saw as the foreign, probably Jewish, influences of "cosmopolitan modernism". Despite Gropius's protestations that as a war veteran and a patriot his work had no subversive political intent, the Berlin Bauhaus was pressured to close in April 1933. Emigrants did succeed, however, in spreading the concepts of the Bauhaus to other countries, including the "New Bauhaus" of Chicago: Mies decided to emigrate to the United States for the directorship of the School of Architecture at the Armour Institute (now Illinois Institute of Technology) in Chicago and to seek building commissions. The simple engineering-oriented functionalism of stripped-down modernism, however, did lead to some Bauhaus influences living on in Nazi Germany. When Hitler's chief engineer, Fritz Todt, began opening the new autobahns (highways) in 1935, many of the bridges and service stations were "bold examples of modernism", and among those submitting designs was Mies van der Rohe.
Architectural output
The paradox of the early Bauhaus was that, although its manifesto proclaimed that the aim of all creative activity was building, the school did not offer classes in architecture until 1927. During the years under Gropius (1919–1927), he and his partner Adolf Meyer observed no real distinction between the output of his architectural office and the school. So the built output of Bauhaus architecture in these years is the output of Gropius: the Sommerfeld house in Berlin, the Otte house in Berlin, the Auerbach house in Jena, and the competition design for the Chicago Tribune Tower, which brought the school much attention. The definitive 1926 Bauhaus building in Dessau is also attributed to Gropius. Apart from contributions to the 1923 Haus am Horn, student architectural work amounted to un-built projects, interior finishes, and craft work like cabinets, chairs and pottery.
In the next two years under Meyer, the architectural focus shifted away from aesthetics and towards functionality. There were major commissions: one from the city of Dessau for five tightly designed "Laubenganghäuser" (apartment buildings with balcony access), which are still in use today, and another for the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer's approach was to research users' needs and scientifically develop the design solution.
Mies van der Rohe repudiated Meyer's politics, his supporters, and his architectural approach. As opposed to Gropius's "study of essentials", and Meyer's research into user requirements, Mies advocated a "spatial implementation of intellectual decisions", which effectively meant an adoption of his own aesthetics. Neither Mies van der Rohe nor his Bauhaus students saw any projects built during the 1930s.
The popular conception of the Bauhaus as the source of extensive Weimar-era working housing is not accurate. Two projects, the apartment building project in Dessau and the Törten row housing also in Dessau, fall in that category, but developing worker housing was not the first priority of Gropius nor Mies. It was the Bauhaus contemporaries Bruno Taut, Hans Poelzig and particularly Ernst May, as the city architects of Berlin, Dresden and Frankfurt respectively, who are rightfully credited with the thousands of socially progressive housing units built in Weimar Germany. The housing Taut built in south-west Berlin during the 1920s, close to the U-Bahn stop Onkel Toms Hütte, is still occupied.
Impact
The Bauhaus had a major impact on art and architecture trends in Western Europe, Canada, the United States and Israel in the decades following its demise, as many of the artists involved fled, or were exiled by the Nazi regime. In 1996, four of the major sites associated with Bauhaus in Germany were inscribed on the UNESCO World Heritage List (with two more added in 2017).
In 1928, the Hungarian painter Alexander Bortnyik founded a school of design in Budapest called Műhely, which means "the studio". Located on the seventh floor of a house on Nagymezo Street, it was meant to be the Hungarian equivalent to the Bauhaus. The literature sometimes refers to it—in an oversimplified manner—as "the Budapest Bauhaus". Bortnyik was a great admirer of László Moholy-Nagy and had met Walter Gropius in Weimar between 1923 and 1925. Moholy-Nagy himself taught at the Műhely. Victor Vasarely, a pioneer of op art, studied at this school before settling in Paris in 1930.
Walter Gropius, Marcel Breuer, and Moholy-Nagy re-assembled in Britain during the mid-1930s and lived and worked in the Isokon housing development in Lawn Road in London before the war caught up with them. Gropius and Breuer went on to teach at the Harvard Graduate School of Design and worked together before their professional split. Their collaboration produced, among other projects, the Aluminum City Terrace in New Kensington, Pennsylvania and the Alan I W Frank House in Pittsburgh. The Harvard school was enormously influential in America in the late 1940s and early 1950s, producing such students as Philip Johnson, I. M. Pei, Lawrence Halprin and Paul Rudolph, among many others.
In the late 1930s, Mies van der Rohe re-settled in Chicago, enjoyed the sponsorship of the influential Philip Johnson, and became one of the world's pre-eminent architects. Moholy-Nagy also went to Chicago and founded the New Bauhaus school under the sponsorship of industrialist and philanthropist Walter Paepcke. This school became the Institute of Design, part of the Illinois Institute of Technology. Printmaker and painter Werner Drewes was also largely responsible for bringing the Bauhaus aesthetic to America and taught at both Columbia University and Washington University in St. Louis. Herbert Bayer, sponsored by Paepcke, moved to Aspen, Colorado in support of Paepcke's Aspen projects at the Aspen Institute. In 1953, Max Bill, together with Inge Aicher-Scholl and Otl Aicher, founded the Ulm School of Design (German: Hochschule für Gestaltung – HfG Ulm) in Ulm, Germany, a design school in the tradition of the Bauhaus. The school is notable for its inclusion of semiotics as a field of study. The school closed in 1968, but the "Ulm Model" concept continues to influence international design education. Another series of projects at the school were the Bauhaus typefaces, mostly realized in the decades afterward.
The influence of the Bauhaus on design education was significant. One of the main objectives of the Bauhaus was to unify art, craft, and technology, and this approach was incorporated into its curriculum. The structure of the Bauhaus Vorkurs (preliminary course) reflected a pragmatic approach to integrating theory and application. In their first year, students learnt the basic elements and principles of design and colour theory, and experimented with a range of materials and processes. This approach to design education became a common feature of architectural and design schools in many countries. For example, the Shillito Design School in Sydney stands as a unique link between Australia and the Bauhaus. The colour and design syllabus of the Shillito Design School was firmly underpinned by the theories and ideologies of the Bauhaus. Its first-year foundational course mimicked the Vorkurs and focused on the elements and principles of design plus colour theory and application. The school, which opened in 1962 and closed in 1980, was founded by Phyllis Shillito, who firmly believed that "A student who has mastered the basic principles of design, can design anything from a dress to a kitchen stove". In Britain, largely under the influence of painter and teacher William Johnstone, Basic Design, a Bauhaus-influenced art foundation course, was introduced at Camberwell School of Art and the Central School of Art and Design, whence it spread to all art schools in the country, becoming universal by the early 1960s.
One of the most important contributions of the Bauhaus is in the field of modern furniture design. The characteristic Cantilever chair and Wassily Chair designed by Marcel Breuer are two examples. (Breuer eventually lost a legal battle in Germany with Dutch architect/designer Mart Stam over patent rights to the cantilever chair design. Although Stam had worked on the design of the Bauhaus's 1923 exhibit in Weimar, and guest-lectured at the Bauhaus later in the 1920s, he was not formally associated with the school, and he and Breuer had worked independently on the cantilever concept, leading to the patent dispute.) The most profitable product of the Bauhaus was its wallpaper.
The physical plant at Dessau survived World War II and was operated as a design school with some architectural facilities by the German Democratic Republic. This included live stage productions in the Bauhaus theater under the name of Bauhausbühne ("Bauhaus Stage"). After German reunification, a reorganized school continued in the same building, with no essential continuity with the Bauhaus under Gropius in the early 1920s. In 1979 Bauhaus-Dessau College started to organize postgraduate programs with participants from all over the world. This effort has been supported by the Bauhaus-Dessau Foundation which was founded in 1974 as a public institution.
Later evaluation of the Bauhaus design credo was critical of its flawed recognition of the human element, an acknowledgment of "the dated, unattractive aspects of the Bauhaus as a projection of utopia marked by mechanistic views of human nature…Home hygiene without home atmosphere."
Subsequent examples which have continued the philosophy of the Bauhaus include Black Mountain College, Hochschule für Gestaltung in Ulm and Domaine de Boisbuchet.
The White City
The White City (Hebrew: העיר הלבנה) refers to a collection of over 4,000 buildings built in the Bauhaus or International Style in Tel Aviv from the 1930s by German Jewish architects who emigrated to the British Mandate of Palestine after the rise of the Nazis. Tel Aviv has the largest number of buildings in the Bauhaus/International Style of any city in the world. Preservation, documentation, and exhibitions have brought attention to Tel Aviv's collection of 1930s architecture. In 2003, the United Nations Educational, Scientific and Cultural Organization (UNESCO) proclaimed Tel Aviv's White City a World Cultural Heritage site, as "an outstanding example of new town planning and architecture in the early 20th century." The citation recognized the unique adaptation of modern international architectural trends to the cultural, climatic, and local traditions of the city. Bauhaus Center Tel Aviv organizes regular architectural tours of the city.
Centenary year, 2019
To mark the centenary of the founding of the Bauhaus, several events, festivals, and exhibitions were held around the world in 2019. The international opening festival at the Berlin Academy of the Arts from 16 to 24 January concentrated on "the presentation and production of pieces by contemporary artists, in which the aesthetic issues and experimental configurations of the Bauhaus artists continue to be inspiringly contagious". Original Bauhaus, The Centenary Exhibition at the Berlinische Galerie (6 September 2019 to 27 January 2020) presented 1,000 original artefacts from the Bauhaus-Archiv's collection and recounted the history behind the objects.
The New European Bauhaus
In September 2020, President of the European Commission Ursula von der Leyen introduced the New European Bauhaus (NEB) initiative during her State of the Union address. The NEB is a creative and interdisciplinary movement that connects the European Green Deal to everyday life. It is a platform for experimentation aiming to unite citizens, experts, businesses and institutions in imagining and designing a sustainable, aesthetic and inclusive future.
Sport and physical activity were an essential part of the original Bauhaus approach. Hannes Meyer, the second director of Bauhaus Dessau, ensured that one day a week was solely devoted to sport and gymnastics. In 1930, Meyer employed two physical education teachers. The Bauhaus school even applied for public funds to enhance its playing field. The inclusion of sport and physical activity in the Bauhaus curriculum had various purposes. First, as Meyer put it, sport combatted a "one-sided emphasis on brainwork." In addition, Bauhaus instructors believed that students could better express themselves if they actively experienced the space, rhythms and movements of the body. The Bauhaus approach also considered physical activity an important contributor to wellbeing and community spirit. Sport and physical activity were essential to the interdisciplinary Bauhaus movement that developed revolutionary ideas and continues to shape our environments today.
Bauhaus staff and students
People who were educated, or who taught or worked in other capacities, at the Bauhaus.
Gallery
See also
Art Deco architecture
Bauhaus Archive
Bauhaus Center Tel Aviv
Bauhaus Dessau Foundation
Bauhaus Museum, Tel Aviv
Bauhaus Museum, Weimar
Bauhaus World Heritage Site
Constructivist architecture
Expressionist architecture
Form follows function
Haus am Horn
IIT Institute of Design
International style (architecture)
Lucia Moholy
Max-Liebling House, Tel Aviv
Modern architecture
Neues Sehen (New Vision)
New Objectivity (architecture)
Swiss Style (design)
Ulm School of Design
Vkhutemas
Women of the Bauhaus
Explanatory footnotes
The closure, and the response of Mies van der Rohe, is fully documented in Elaine Hochman's Architects of Fortune.
Google honored Bauhaus for its 100th anniversary on 12 April 2019 with a Google Doodle.
Citations
General and cited references
Olaf Thormann: Bauhaus Saxony. arnoldsche Art Publishers, 2019.
Further reading
External links
Bauhaus Everywhere — Google Arts & Culture
Collection: Artists of the Bauhaus from the University of Michigan Museum of Art
1919 establishments in Germany
1933 disestablishments in Germany
Architecture in Germany
Architecture schools
Art movements
Design schools in Germany
Expressionist architecture
German architectural styles
Graphic design
Industrial design
Modernist architecture
Bauhaus, Dessau
Visual arts education
Bauhaus
Weimar culture
World Heritage Sites in Germany
https://en.wikipedia.org/wiki/Biostatistics
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.
History
Biostatistics and genetics
Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have used statistical concepts since their beginning to understand observed experimental results, and some geneticists even contributed statistical advances through the development of new methods and tools. Gregor Mendel began the study of genetics by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's work on Mendelian inheritance, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model in which fractions of the heredity came from each ancestor, composing an infinite series. He called this the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance came exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis.
Resolving these differences also made it possible to define the concept of population genetics and brought genetics and evolution together. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.
Ronald Fisher worked alongside statistician Betty Allan developing several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as in Allan's scientific papers. Fisher went on to make many contributions to genetics and statistics. These include analysis of variance (ANOVA), the concept of the p-value, Fisher's exact test, and Fisher's equation for population dynamics. He is credited with the sentence "Natural selection is a mechanism for generating an exceedingly high degree of improbability".
Sewall G. Wright developed F-statistics and methods of computing them, and defined the inbreeding coefficient.
J. B. S. Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. He also developed the theory of primordial soup.
These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled.
In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study.
Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."
Research planning
Any research in the life sciences is proposed to answer a scientific question we might have. To answer this question with high certainty, we need accurate results. The correct definition of the main hypothesis and the research plan will reduce errors in decision-making when interpreting a phenomenon. The research plan might include the research question, the hypothesis to be tested, the experimental design, data collection methods, data analysis perspectives and the costs involved. It is essential to base the study on the three basic principles of experimental statistics: randomization, replication, and local control.
Research question
The research question will define the objective of a study. The question will guide the research, so it needs to be concise, while at the same time focusing on interesting and novel topics that may improve science and knowledge in that field. To define the way to ask the scientific question, an exhaustive literature review might be necessary, so that the research can add value to the scientific community.
Hypothesis definition
Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposition is called the null hypothesis (H0) and is usually based on permanent knowledge about the topic or on an obvious occurrence of the phenomenon, sustained by a deep literature review. We can say it is the standard expected answer for the data under the situation being tested. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis is the denial of H0. It assumes some degree of association between the treatment and the outcome. Both hypotheses are sustained by the research question and its expected and unexpected answers.
As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mice metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects over animals metabolism (H1: μ1 ≠ μ2).
The hypothesis is defined by the researcher, according to his or her interest in answering the main question. Besides that, there can be more than one alternative hypothesis: it can assume not only differences across observed parameters, but also their degree of difference (i.e. higher or lower).
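The mouse-diet comparison can be sketched numerically. The following is a minimal illustration rather than a prescribed method: it computes Welch's two-sample t statistic for two invented sets of weight gains, where a large |t| relative to the t distribution with the given degrees of freedom is evidence against H0: μ1 = μ2.

```python
import math
from statistics import mean, variance

# Hypothetical weight gains (grams) under two diets; the numbers are invented.
diet_a = [22.1, 23.4, 21.8, 24.0, 22.9, 23.1]
diet_b = [25.2, 24.8, 26.1, 25.5, 24.9, 25.7]

def welch_t(x, y):
    """Welch's two-sample t statistic and its approximate degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)  # sample variances (n - 1 denominator)
    se2 = vx / nx + vy / ny            # squared standard error of the mean difference
    t = (mean(x) - mean(y)) / math.sqrt(se2)
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t(diet_a, diet_b)
print(round(t, 2), round(df, 1))
```

In practice one would compare |t| against a critical value (or compute a p-value) from the t distribution with df degrees of freedom before rejecting H0.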
Sampling
Usually, a study aims to understand the effect of a phenomenon on a population. In biology, a population is defined as all the individuals of a given species in a specific area at a given time. In biostatistics, this concept is extended to a variety of possible collections of study. Thus, in biostatistics, a population is not only the individuals, but the total of one specific component of their organisms, such as the whole genome, or all the sperm cells for animals, or the total leaf area for a plant.
It is usually not possible to take measurements from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population in order to make posterior inferences about that population, so the sample should capture as much of the population's variability as possible. The sample size is determined by several factors, from the scope of the research to the resources available. In clinical research, the trial type (inferiority, equivalence, or superiority) is key in determining sample size.
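Simple random sampling without replacement, the basic scheme underlying this idea, can be sketched with the standard library's `random.sample`. The "population" below is a synthetic list of hypothetical leaf areas, used only to make the example self-contained:

```python
import random

# A toy "population": leaf areas (cm^2) of 1000 plants -- hypothetical values.
random.seed(42)
population = [round(random.uniform(50.0, 150.0), 1) for _ in range(1000)]

# Simple random sampling without replacement: every element has the same
# probability of selection, which supports unbiased inference about the population.
sample = random.sample(population, 30)

sample_mean = sum(sample) / len(sample)
```

In a real study the sample size (here 30) would be chosen from power calculations and available resources, not fixed arbitrarily.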
Experimental design
Experimental designs uphold the basic principles of experimental statistics. There are three basic experimental designs for randomly allocating treatments to all plots of the experiment: completely randomized design, randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study, and the arrangement of treatments within the study is essential because the environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under the names "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs may include control plots, determined by the researcher, to provide an error estimation during inference.
In clinical studies, the samples are usually smaller than in other biological studies, and in most cases, the environment effect can be controlled or measured. It is common to use randomized controlled clinical trials, where results are usually compared with observational study designs such as case–control or cohort.
Data collection
Data collection methods must be considered in research planning, because they highly influence the sample size and experimental design.
Data collection varies according to type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering presence or intensity of disease, using score criterion to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information using instruments.
In agriculture and biology studies, yield data and its components can be obtained by metric measures. However, pest and disease injuries in plants are obtained by observation, using score scales for levels of damage. In genetic studies especially, modern methods for data collection in the field and laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments and make it possible to evaluate many plots in less time than a human-based method of data collection alone.
Finally, all collected data of interest must be stored in an organized data frame for further analysis.
Analysis and data interpretation
Descriptive tools
Data can be represented through tables or graphical representations, such as line charts, bar charts, histograms, and scatter plots. Also, measures of central tendency and variability can be very useful to describe an overview of the data. Some examples follow:
Frequency tables
One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of data. Frequency can be:
Absolute: represents the number of times that a determined value appears;
Relative: obtained by dividing the absolute frequency by the total number of observations.
In the next example, we have the number of genes in ten operons of the same organism.
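The original frequency table is not reproduced here, but the computation can be sketched with Python's `collections.Counter`; the operon gene counts below are hypothetical, used only for illustration:

```python
from collections import Counter

# Hypothetical numbers of genes in ten operons of the same organism
# (illustrative values; the article's original table is not reproduced here).
genes_per_operon = [3, 2, 3, 4, 2, 3, 5, 2, 3, 4]

# Absolute frequency: how many times each value appears.
absolute = Counter(genes_per_operon)

# Relative frequency: absolute frequency divided by the total number of observations.
total = sum(absolute.values())
relative = {value: count / total for value, count in absolute.items()}
```

By construction, the relative frequencies sum to 1, which is a useful sanity check on any frequency table.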
Line graph
Line graphs represent the variation of a value over another metric, such as time. In general, values are represented in the vertical axis, while the time variation is represented in the horizontal axis.
Bar chart
A bar chart is a graph that shows categorical data as bars with heights (vertical bars) or widths (horizontal bars) proportional to the values they represent. Bar charts provide an image that could also be represented in a tabular format.
In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016. The sharp fall in December 2016 reflects the effect of the Zika virus outbreak on the birth rate in Brazil.
Histograms
The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson.
Scatter plot
A scatter plot is a mathematical diagram that uses Cartesian coordinates to display values of a dataset. A scatter plot shows the data as a set of points, each one presenting the value of one variable determining the position on the horizontal axis and another variable on the vertical axis. They are also called scatter graph, scatter chart, scattergram, or scatter diagram.
Mean
The arithmetic mean is the sum of a collection of values (x1 + x2 + ... + xn) divided by the number of items in the collection (n).
Median
The median is the value in the middle of a sorted dataset.
Mode
The mode is the value of a set of data that appears most often.
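These three measures of central tendency are available directly in Python's standard `statistics` module; the small dataset below is illustrative:

```python
import statistics

data = [4, 1, 2, 2, 3, 5, 2]  # illustrative dataset

mean_value = statistics.mean(data)      # (4 + 1 + 2 + 2 + 3 + 5 + 2) / 7
median_value = statistics.median(data)  # middle value of the sorted data
mode_value = statistics.mode(data)      # most frequent value
```

For this dataset the mean is 19/7 ≈ 2.71, while the median and mode are both 2, showing how the three measures can differ on the same data.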
Box plot
The box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the whiskers (lines), and the interquartile range (IQR) represents the middle 50% of the data (between the 25th and 75th percentiles). Outliers may be plotted as circles.
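The quartiles behind a box plot can be computed with `statistics.quantiles` (Python 3.8+); the dataset below is illustrative:

```python
import statistics

data = [7, 15, 36, 39, 40, 41, 42, 43, 47, 49]  # illustrative values

# Quartiles split the sorted data into four parts; statistics.quantiles
# uses the "exclusive" interpolation method by default.
q1, q2, q3 = statistics.quantiles(data, n=4)

iqr = q3 - q1  # the box in a box plot spans this range
```

Note that q2 is simply the median; the whiskers and outlier cutoffs of a box plot (commonly q1 − 1.5·IQR and q3 + 1.5·IQR) are then derived from these values.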
Correlation coefficients
Although correlations between two different kinds of data can be suggested by graphs, such as a scatter plot, it is necessary to validate this with numerical information. For this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association.
Pearson correlation coefficient
Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by ρ (rho) for the population and r for the sample, assumes values between −1 and 1, where ρ = 1 represents a perfect positive correlation, ρ = −1 represents a perfect negative correlation, and ρ = 0 indicates no linear correlation.
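The sample coefficient r can be computed from its definition (covariance divided by the product of the standard deviations) in a few lines of plain Python; the paired values below are hypothetical:

```python
import math

# Hypothetical paired observations, roughly linear in x -- illustrative values only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Numerator: sum of cross-products of deviations from the means.
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# Denominator: product of the root sums of squared deviations.
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
sy = math.sqrt(sum((yi - my) ** 2 for yi in y))

r = cov / (sx * sy)  # sample Pearson correlation coefficient
```

Because y was chosen to be nearly linear in x, r comes out very close to 1 here; with real data, any value in [−1, 1] is possible.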
Inferential statistics
Inferential statistics is used to make inferences about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters to describe the population of interest, but since the data is limited, it is necessary to make use of a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial for making inferences.
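The standard error of the mean (SEM) is the sample standard deviation divided by the square root of the sample size; a minimal sketch, with hypothetical measurements:

```python
import math
import statistics

measurements = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]  # hypothetical sample

n = len(measurements)
sd = statistics.stdev(measurements)  # sample standard deviation (n - 1 denominator)
sem = sd / math.sqrt(n)              # standard error of the mean
```

The SEM shrinks as n grows, which is why larger samples yield more precise estimates of the population mean.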
Hypothesis testing
Hypothesis testing is essential for making inferences about populations, aiming to answer research questions, as posed in the "Research planning" section. Authors define four steps to be set:
The hypothesis to be tested: as stated earlier, we have to work with the definition of a null hypothesis (H0), which is going to be tested, and an alternative hypothesis. But they must be defined before the experiment is implemented.
Significance level and decision rule: A decision rule depends on the level of significance, or in other words, the acceptable error rate (α). It is easier to think that we define a critical value that determines the statistical significance when a test statistic is compared with it. So, α also has to be predefined before the experiment.
Experiment and statistical analysis: This is when the experiment is actually implemented following the appropriate experimental design, data is collected, and the most suitable statistical tests are applied.
Inference: made when the null hypothesis is rejected or not rejected, based on the evidence from the comparison of the p-value with α. Note that failure to reject H0 just means that there is not enough evidence to support its rejection, not that this hypothesis is true.
Confidence intervals
A confidence interval is a range of values that contains the true parameter value with a given level of confidence. The first step is to compute the best unbiased estimate of the population parameter. The upper value of the interval is obtained by adding to this estimate the product of the standard error of the mean and the critical value associated with the confidence level. The calculation of the lower value is similar, but a subtraction is applied instead of a sum.
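This construction can be sketched for a population mean using the normal-approximation critical value 1.96 for a 95% interval; the sample values below are hypothetical:

```python
import math
import statistics

sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9, 5.0, 5.2]  # hypothetical data

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

z = 1.96  # critical value for a 95% confidence level (normal approximation)
lower, upper = mean - z * sem, mean + z * sem
```

For small samples, the t-distribution critical value (slightly larger than 1.96) would normally replace z, widening the interval to reflect the extra uncertainty in the estimated standard deviation.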
Statistical considerations
Power and statistical error
When testing a hypothesis, there are two types of statistic errors possible: Type I error and Type II error. The type I error or false positive is the incorrect rejection of a true null hypothesis and the type II error or false negative is the failure to reject a false null hypothesis. The significance level denoted by α is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β and statistical power of the test is 1 − β.
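The meaning of α as the type I error rate can be checked by simulation: when the null hypothesis is true, a two-sided z-test at α = 0.05 should falsely reject in about 5% of repeated experiments. A minimal sketch using only the standard library:

```python
import math
import random

random.seed(1)

# Simulate many experiments in which H0 is true (samples drawn from N(0, 1))
# and count how often a two-sided z-test at alpha = 0.05 falsely rejects H0.
n, trials, rejections = 30, 5000, 0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(sample) / n) / (1.0 / math.sqrt(n))  # known sigma = 1
    if abs(z) > 1.96:
        rejections += 1

type_i_rate = rejections / trials  # should be close to alpha = 0.05
```

A symmetric simulation with samples drawn under a true alternative would estimate 1 − β, the power of the test.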
p-value
The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for declaring results significant. If p is less than α, the null hypothesis (H0) is rejected.
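For a z-test, the p-value follows directly from the standard normal CDF, which can be written with `math.erf`; a minimal sketch:

```python
import math

def two_sided_p_value(z: float) -> float:
    """p-value for a two-sided z-test: P(|Z| >= |z|) under H0."""
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

p = two_sided_p_value(2.5)
significant = p < 0.05  # compare the p-value with the predefined alpha
```

Here z = 2.5 gives p ≈ 0.0124, below α = 0.05, so H0 would be rejected; z = 0 gives p = 1, the least extreme result possible.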
Multiple testing
In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and strategies are used to control this. This is commonly achieved by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with a value of α = α*/m. This ensures that the familywise error rate across all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). This procedure ensures that, for independent tests, the false discovery rate is at most q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives.
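Both thresholds can be sketched side by side; the p-values below are hypothetical, and the Benjamini-Hochberg step-up rule is used as the FDR-controlling procedure:

```python
# Illustrative p-values from m = 10 hypothetical tests.
p_values = [0.001, 0.008, 0.012, 0.041, 0.050, 0.210, 0.330, 0.470, 0.620, 0.910]
m = len(p_values)

# Bonferroni: compare each p-value with alpha* / m.
alpha_star = 0.05
bonferroni_rejections = sum(p < alpha_star / m for p in p_values)

# Benjamini-Hochberg (FDR): find the largest k with p_(k) <= (k / m) * q*.
q_star = 0.05
ranked = sorted(p_values)
k_max = 0
for k, p in enumerate(ranked, start=1):
    if p <= (k / m) * q_star:
        k_max = k
bh_rejections = k_max  # reject the k_max smallest p-values
```

On these numbers Bonferroni rejects only the single smallest p-value, while Benjamini-Hochberg rejects three, illustrating the extra power of FDR control.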
Mis-specification and robustness checks
The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, then the null may be frequently rejected even if the main hypothesis is true. Such rejections are said to be due to model mis-specification. Verifying whether the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification.
Model selection criteria
Model selection criteria choose the model that best approximates the true model. Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are examples of asymptotically efficient criteria.
Developments and big data
Recent developments have made a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale, and the ability to perform much more complex analysis using computational techniques. This comes from developments in areas such as sequencing technologies, bioinformatics, and machine learning (see machine learning in bioinformatics).
Use in high-throughput data
New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously. Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells. However, only a fraction of genes will be differentially expressed.
Multicollinearity often occurs in high-throughput biostatistical settings. Due to high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one. It could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high dimensional data (i.e. when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R2-values despite very low predictive power of the statistical model. These classical statistical techniques (esp. least squares linear regression) were developed for low dimensional data (i.e. where the number of observations n is much larger than the number of predictors p: n >> p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R2 of the validation test set, not those of the training set.
Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes. These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: It is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach.
Bioinformatics advances in databases, data mining, and biological interpretation
The development of biological databases enables storage and management of biological data with the possibility of ensuring access for users around the world. They are useful for researchers depositing data, retrieving information and files (raw or processed) originating from other experiments, or indexing scientific articles, as in PubMed. Another possibility is to search for a desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), to knowledge on gene characterization and pathways (KEGG), and to the description of gene function, classifying it by cellular component, molecular function and biological process (Gene Ontology). In addition to databases that contain specific molecular information, there are others that are ample in the sense that they store information about an organism or group of organisms. An example of a database directed towards just one organism, but containing much data about it, is the Arabidopsis thaliana genetic and molecular database, TAIR. Phytozome, in turn, stores the assemblies and annotation files of dozens of plant genomes, also containing visualization and analysis tools. Moreover, some databases are interconnected for information exchange/sharing, and a major initiative was the International Nucleotide Sequence Database Collaboration (INSDC), which relates data from DDBJ, EMBL-EBI, and NCBI.
Nowadays, the increase in size and complexity of molecular datasets has led to the use of powerful statistical methods provided by computer science algorithms developed in the machine learning area. Therefore, data mining and machine learning allow the detection of patterns in data with a complex structure, such as biological data, by using methods of supervised and unsupervised learning, regression, cluster detection and association rule mining, among others. To indicate some of them, self-organizing maps and k-means are examples of cluster algorithms; neural network implementations and support vector machine models are examples of common machine learning algorithms.
Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, passing through data generation and analysis, and ending with biological interpretation of the results.
Use of computationally intensive methods
On the other hand, the advent of modern computer technology and relatively cheap computing resources have enabled computer-intensive biostatistical methods like bootstrapping and re-sampling methods.
In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees. Decision trees have the advantage that you can draw them and interpret them (even with a basic understanding of mathematics and statistics). Random Forests have thus been used for clinical decision support systems.
Applications
Public health
Public health applications include epidemiology, health services research, nutrition, environmental health, and health care policy and management. In these medical contexts, it is important to consider the design and analysis of clinical trials. One example is the assessment of the severity state of a patient with a prognosis of a disease outcome.
With new technologies and genetics knowledge, biostatistics is now also used for systems medicine, which consists of a more personalized medicine. For this, data from different sources are integrated, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new omics technologies.
Quantitative genetics
Quantitative genetics draws on population genetics and statistical genetics in order to link variation in genotype with variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region responsible for a continuous trait is called a quantitative trait locus (QTL). The study of QTLs became feasible through the use of molecular markers and the measurement of traits in populations, but their mapping requires a population obtained from an experimental cross, such as an F2 or recombinant inbred strains/lines (RILs). To scan for QTL regions in a genome, a gene map based on linkage has to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.
However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large offspring. Furthermore, allele diversity is restricted to individuals derived from contrasting parents, which limits studies of allele diversity in a panel of individuals representing a natural population. For this reason, the genome-wide association study was proposed in order to identify QTLs based on linkage disequilibrium, that is, the non-random association between traits and molecular markers. It was leveraged by the development of high-throughput SNP genotyping.
In animal and plant breeding, the use of markers in selection, mainly molecular ones, contributed to the development of marker-assisted selection. While QTL mapping is limited by low resolution, GWAS does not have enough power for rare variants of small effect that are also influenced by the environment. So, the concept of genomic selection (GS) arose in order to use all molecular markers in the selection and allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population and develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals that are genotyped but not phenotyped, called the testing population. This kind of study can also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype results based on the prediction, which is used to check the accuracy of the model.
As a summary, some points about the application of quantitative genetics are:
This has been used in agriculture to improve crops (Plant breeding) and livestock (Animal breeding).
In biomedical research, this work can assist in finding candidate gene alleles that can cause or influence predisposition to diseases in human genetics.
Expression data
Studies of differential expression of genes from RNA-Seq data, as with RT-qPCR and microarrays, demand comparison of conditions. The goal is to identify genes with a significant change in abundance between different conditions. Experiments are then designed appropriately, with replicates for each condition/treatment, and with randomization and blocking when necessary. In RNA-Seq, the quantification of expression uses the information from mapped reads, summarized in some genetic unit, such as the exons that make up a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better explained by other distributions. The first distribution used was the Poisson, but it underestimates the sample error, leading to false positives. Currently, biological variation is considered by methods that estimate a dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance, and as the number of genes is high, multiple-test correction has to be considered. Other examples of analysis on genomics data come from microarray or proteomics experiments, often concerning diseases or disease stages.
Other studies
Ecology, ecological forecasting
Biological sequence analysis
Systems biology for gene network inference or pathways analysis.
Clinical research and pharmaceutical development
Population dynamics, especially with regard to fisheries science.
Phylogenetics and evolution
Pharmacodynamics
Pharmacokinetics
Neuroimaging
Tools
There are many tools that can be used to perform statistical analysis on biological data. Most of them are also useful in other areas of knowledge, covering a large number of applications. Brief descriptions of some of them follow (in alphabetical order):
ASReml: A software package developed by VSNi that can also be used in the R environment as a package. It is designed to estimate variance components under a general linear mixed model using restricted maximum likelihood (REML). Models with fixed and random effects, nested or crossed, are allowed. It also makes it possible to investigate different variance-covariance matrix structures.
CycDesigN: A computer package developed by VSNi that helps researchers create experimental designs and analyze data coming from a design in one of the classes handled by CycDesigN. These classes are resolvable, non-resolvable, partially replicated, and crossover designs. It also includes less commonly used designs such as the Latinized ones, e.g., the t-Latinized design.
Orange: A programming interface for high-level data processing, data mining and data visualization. It includes tools for gene expression and genomics.
R: An open source environment and programming language dedicated to statistical computing and graphics. It is an implementation of the S language maintained by CRAN. In addition to its functions for reading data tables, computing descriptive statistics, and developing and evaluating models, its repository contains packages developed by researchers around the world. This allows the development of functions written to deal with the statistical analysis of data that comes from specific applications. In the case of bioinformatics, for example, there are packages located in the main repository (CRAN) and in others, such as Bioconductor. It is also possible to use packages under development that are shared on hosting services such as GitHub.
SAS: A widely used data analysis software package found across universities, services and industry. Developed by a company of the same name (SAS Institute), it uses the SAS language for programming.
PLA 3.0: A biostatistical analysis software package for regulated environments (e.g., drug testing) which supports quantitative response assays (parallel-line, parallel-logistics, slope-ratio) and dichotomous assays (quantal response, binary assays). It also supports weighting methods for combination calculations and the automatic data aggregation of independent assay data.
Weka: A Java software package for machine learning and data mining, including tools and methods for visualization, clustering, regression, association rules, and classification. There are tools for cross-validation and bootstrapping, and a module for algorithm comparison. Weka can also be called from other programming languages, such as Perl or R.
Python (programming language): image analysis, deep learning, machine learning
SQL databases
NoSQL
NumPy: numerical Python
SciPy
SageMath
LAPACK: linear algebra
MATLAB
Apache Hadoop
Apache Spark
Amazon Web Services
Scope and training programs
Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics.
In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments will often host theoretical/methodological research that is less common in biostatistics programs, and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics, and biological areas other than medicine.
Specialized journals
Biostatistics
International Journal of Biostatistics
Journal of Epidemiology and Biostatistics
Biostatistics and Public Health
Biometrics
Biometrika
Biometrical Journal
Communications in Biometry and Crop Science
Statistical Applications in Genetics and Molecular Biology
Statistical Methods in Medical Research
Pharmaceutical Statistics
Statistics in Medicine
See also
Bioinformatics
Epidemiological method
Epidemiology
Group size measures
Health indicator
Mathematical and theoretical biology
References
External links
The International Biometric Society
The Collection of Biostatistics Research Archive
Guide to Biostatistics (MedPageToday.com)
Biomedical Statistics
Bioinformatics
https://en.wikipedia.org/wiki/Braille
Braille ( , ) is a tactile writing system used by people who are visually impaired. It can be read either on embossed paper or by using refreshable braille displays that connect to computers and smartphone devices. Braille can be written using a slate and stylus, a braille writer, an electronic braille notetaker or with the use of a computer connected to a braille embosser.
Braille is named after its creator, Louis Braille, a Frenchman who lost his sight as a result of a childhood accident. In 1824, at the age of fifteen, he developed the braille code based on the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first binary form of writing developed in the modern era.
Braille characters are formed using a combination of six raised dots arranged in a 3 × 2 matrix, called the braille cell. The number and arrangement of these dots distinguishes one character from another. Since the various braille alphabets originated as transcription codes for printed writing, the mappings (sets of character designations) vary from language to language, and even within one; in English Braille there are three levels: uncontracted braille, a letter-by-letter transcription used for basic literacy; contracted braille, an addition of abbreviations and contractions used as a space-saving mechanism; and grade 3, various non-standardized personal stenographies that are less commonly used.
In addition to braille text (letters, punctuation, contractions), it is also possible to create embossed illustrations and graphs, with the lines either solid or made of series of dots, arrows, and bullets that are larger than braille dots. A full braille cell includes six raised dots arranged in two columns, each column having three dots. The dot positions are identified by numbers from one to six. There are 64 possible combinations, including no dots at all for a word space. Dot configurations can be used to represent a letter, digit, punctuation mark, or even a word.
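The 64-combination count can be verified programmatically. Unicode's Braille Patterns block (starting at U+2800) maps dot k to bit k-1 of the code point offset, so each cell corresponds to exactly one character; a minimal sketch in Python:

```python
from itertools import chain, combinations

def cell(dots):
    """Render a braille cell from an iterable of dot numbers (1-6).
    Unicode braille patterns start at U+2800, with dot k stored in bit k-1."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)
    return chr(0x2800 + mask)

# All 2**6 = 64 patterns, from the empty cell (word space) to the full 6-dot cell.
subsets = chain.from_iterable(combinations(range(1, 7), r) for r in range(7))
all_cells = {cell(s) for s in subsets}
```

For example, dot 1 alone yields U+2801 (the letter "a" in most braille alphabets), and the empty set yields U+2800, the blank cell used as a word space.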
Early braille education is crucial to literacy, education and employment among the blind. Despite the evolution of new technologies, including screen reader software that reads information aloud, braille provides blind people with access to spelling, punctuation and other aspects of written language less accessible through audio alone.
While some have suggested that audio-based technologies will decrease the need for braille, technological advancements such as braille displays have continued to make braille more accessible and available. Braille users highlight that braille remains as essential as print is to the sighted.
History
Braille was based on a tactile code, now known as night writing, developed by Charles Barbier. (The name "night writing" was later given to it when it was considered as a means for soldiers to communicate silently at night and without a light source, but Barbier's writings do not use this term and suggest that it was originally designed as a simpler form of writing and for the visually impaired.) In Barbier's system, sets of 12 embossed dots were used to encode 36 different sounds. Braille identified three major defects of the code: first, the symbols represented phonetic sounds and not letters of the alphabet; thus the code was unable to render the orthography of the words. Second, the 12-dot symbols could not easily fit beneath the pad of the reading finger. This required the reading finger to move in order to perceive the whole symbol, which slowed the reading process. (This was because Barbier's system was based only on the number of dots in each of two 6-dot columns but not the pattern of the dots.) Third, the code did not include symbols for numerals or punctuation. Braille's solution was to use 6-dot cells and to assign a specific pattern to each letter of the alphabet. Braille also developed symbols for representing numerals and punctuation.
At first, Braille was a one-to-one transliteration of the French alphabet, but soon various abbreviations (contractions) and even logograms were developed, creating a system much more like shorthand.
Today, there are braille codes for over 133 languages.
In English, some variations in the braille codes have traditionally existed among English-speaking countries. In 1991, work to standardize the braille codes used in the English-speaking world began. Unified English Braille (UEB) has been adopted in all seven member countries of the International Council on English Braille (ICEB) as well as Nigeria.
For blind readers, Braille is an independent writing system, rather than a code of printed orthography.
Derivation
Braille is derived from the Latin alphabet, albeit indirectly. In Braille's original system, the dot patterns were assigned to letters according to their position within the alphabetic order of the French alphabet of the time, with accented letters and w sorted at the end.
Unlike print, which consists of mostly arbitrary symbols, the braille alphabet follows a logical sequence. The first ten letters of the alphabet, a–j, use the upper four dot positions: (black dots in the table below). These stand for the ten digits 1–9 and 0 in an alphabetic numeral system similar to Greek numerals (as well as derivations of it, including Hebrew numerals, Cyrillic numerals, Abjad numerals, also Hebrew gematria and Greek isopsephy).
Though the dots are assigned in no obvious order, the cells with the fewest dots are assigned to the first three letters (and lowest digits), abc = 123 (), and to the three vowels in this part of the alphabet, aei (), whereas the even digits, 4, 6, 8, 0 (), are corners/right angles.
The next ten letters, k–t, are identical to a–j respectively, apart from the addition of a dot at position 3 (red dots in the bottom left corner of the cell in the table below):
{| class="wikitable" style="text-align:center"
|+ Derivation (colored dots) of the 26 braille letters of the basic Latin alphabet from the 10 numeric digits (black dots)
|-
|||||||||||||||||||
|-
|a/1||b/2||c/3||d/4||e/5||f/6||g/7||h/8||i/9||j/0
|-
|||||||||||||||||||
|-
|k||l||m||n||o||p||q||r||s||t
|-
||||||||||| colspan="4" rowspan="2" | ||
|-
|u||v||x||y||z||w
|}
The next ten letters (the next "decade") are the same again, but with dots also at both position 3 and position 6 (green dots in the bottom row of the cell in the table above). Here w was initially left out as not being a part of the official French alphabet at the time of Braille's life; the French braille order is u v x y z ç é à è ù ().
The next ten letters, ending in w, are the same again, except that for this series position 6 (purple dot in the bottom right corner of the cell in the table above) is used without a dot at position 3. In French braille these are the letters â ê î ô û ë ï ü œ w (). W had been tacked onto the end of the 39-letter French alphabet to accommodate English.
The a–j series shifted down by one dot space () is used for punctuation. Letters a and c , which only use dots in the top row, were shifted two places for the apostrophe and hyphen: . (These are also the decade diacritics, at left in the table below, of the second and third decade.)
In addition, there are ten patterns that are based on the first two letters () with their dots shifted to the right; these were assigned to non-French letters (ì ä ò ), or serve non-letter functions: (superscript; in English the accent mark), (currency prefix), (capital, in English the decimal point), (number sign), (emphasis mark), (symbol prefix).
{| class="wikitable noresize" style="text-align:center"
|+ The 64 modern braille cells
!colspan=2| decade || ||colspan=10| numeric sequence || ||colspan=2| shift right
|-
!1st
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!2nd
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!3rd
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!4th
| ||
|
|
|
|
|
|
|
|
|
| ||
|
|
|-
!5th
! shift down
|
|
|
|
|
|
|
|
|
|
| ||
|
|
|}
The first four decades are alike in that their decade dots are combined with the numeric sequence by a logical "inclusive OR" operation, whereas the fifth decade applies a "shift down" operation to the numeric sequence.
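The "inclusive OR" construction can be sketched as set unions over dot positions (an illustrative snippet, not any standard library; the first-decade dot assignments are those shown in the derivation table above):

```python
# First decade a-j as sets of raised dot positions (upper four positions only).
FIRST = {'a': {1}, 'b': {1, 2}, 'c': {1, 4}, 'd': {1, 4, 5}, 'e': {1, 5},
         'f': {1, 2, 4}, 'g': {1, 2, 4, 5}, 'h': {1, 2, 5},
         'i': {2, 4}, 'j': {2, 4, 5}}

def unicode_cell(dots):
    # Unicode encodes dot n as bit n-1 of the offset above the base U+2800.
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

# OR-ing in the decade dots yields the later decades.
second = {k: v | {3} for k, v in FIRST.items()}      # k-t
third  = {k: v | {3, 6} for k, v in FIRST.items()}   # u v x y z (and, in French, accented letters)
fourth = {k: v | {6} for k, v in FIRST.items()}      # French accented letters and w

print(unicode_cell(FIRST['a']), unicode_cell(second['c']))  # ⠁ (a), ⠍ (m = c + dot 3)
```

Because each decade is the same ten upper-dot patterns with fixed lower dots OR-ed in, the whole 40-letter core of the system follows from the first ten cells.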
Originally there had been nine decades. The fifth through ninth used dashes as well as dots, but proved to be impractical and were soon abandoned. These could be replaced with what we now know as the number sign (), though that only caught on for the digits (old 5th decade → modern 1st decade). The dash occupying the top row of the original sixth decade was simply dropped, producing the modern fifth decade. (See 1829 braille.)
Assignment
Historically, there have been three principles in assigning the values of a linear script (print) to Braille: Using Louis Braille's original French letter values; reassigning the braille letters according to the sort order of the print alphabet being transcribed; and reassigning the letters to improve the efficiency of writing in braille.
Under international consensus, most braille alphabets follow the French sorting order for the 26 letters of the basic Latin alphabet, and there have been attempts at unifying the letters beyond these 26 (see international braille), though differences remain, for example, in German Braille. This unification avoids the chaos of each nation reordering the braille code to match the sorting order of its print alphabet, as happened in Algerian Braille, where braille codes were numerically reassigned to match the order of the Arabic alphabet and bear little relation to the values used in other countries (compare modern Arabic Braille, which uses the French sorting order), and as happened in an early American version of English Braille, where the letters w, x, y, z were reassigned to match English alphabetical order. A convention sometimes seen for letters beyond the basic 26 is to exploit the physical symmetry of braille patterns iconically, for example, by assigning a reversed n to ñ or an inverted s to sh. (See Hungarian Braille and Bharati Braille, which do this to some extent.)
A third principle was to assign braille codes according to frequency, with the simplest patterns (quickest ones to write with a stylus) assigned to the most frequent letters of the alphabet. Such frequency-based alphabets were used in Germany and the United States in the 19th century (see American Braille), but with the invention of the braille typewriter their advantage disappeared, and none are attested in modern use. They had the disadvantage that the resulting small number of dots in a text interfered with following the alignment of the letters, and consequently made texts more difficult to read than Braille's more arbitrary letter assignment. Finally, there are braille scripts that do not order the codes numerically at all, such as Japanese Braille and Korean Braille, which are based on more abstract principles of syllable composition.
Texts are sometimes written in a script of eight dots per cell rather than six, enabling them to encode a greater number of symbols. (See Gardner–Salinas braille codes.) Luxembourgish Braille has adopted eight-dot cells for general use; for example, it adds a dot below each letter to derive its capital variant.
Form
Braille was the first writing system with binary encoding. The system as devised by Braille consists of two parts:
Character encoding that mapped characters of the French alphabet to tuples of six bits (the dots).
The physical representation of those six-bit characters with raised dots in a braille cell.
Within an individual cell, the dot positions are arranged in two columns of three positions. A raised dot can appear in any of the six positions, producing 64 (2<sup>6</sup>) possible patterns, including one in which there are no raised dots. For reference purposes, a pattern is commonly described by listing the positions where dots are raised, the positions being universally numbered, from top to bottom, as 1 to 3 on the left and 4 to 6 on the right. For example, dot pattern 1-3-4 describes a cell with three dots raised, at the top and bottom in the left column and at the top of the right column: that is, the letter m. The lines of horizontal braille text are separated by a space, much like visible printed text, so that the dots of one line can be differentiated from the braille text above and below. Different assignments of braille codes (or code pages) are used to map the character sets of different printed scripts to the six-bit cells. Braille assignments have also been created for mathematical and musical notation. However, because the six-dot braille cell allows only 64 (2<sup>6</sup>) patterns, including space, the characters of a braille script commonly have multiple values, depending on their context. That is, character mapping between print and braille is not one-to-one. For example, the character corresponds in print to both the letter d and the digit 4.
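The numbering convention above can be sketched in a short program (a minimal illustration, not part of any braille standard): it renders a cell described by its raised dot positions as a 2×3 text grid, and confirms the 64-pattern count by enumerating every subset of the six positions.

```python
from itertools import chain, combinations

POSITIONS = range(1, 7)  # dots 1-3 in the left column, 4-6 in the right

def render(dots):
    """Render a 6-dot cell as a 2x3 text grid ('o' = raised, '.' = flat)."""
    rows = []
    for r in range(3):
        left, right = r + 1, r + 4          # dot numbers in this row
        rows.append(('o' if left in dots else '.') +
                    ('o' if right in dots else '.'))
    return '\n'.join(rows)

# Every subset of the six positions is a distinct pattern: 2**6 = 64,
# including the empty subset (the word space).
all_patterns = list(chain.from_iterable(
    combinations(POSITIONS, k) for k in range(7)))
assert len(all_patterns) == 64

print(render({1, 3, 4}))   # dot pattern 1-3-4, the letter m
```

Running this prints the cell for m with both top dots and the bottom-left dot raised, matching the 1-3-4 description in the text.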
In addition to simple encoding, many braille alphabets use contractions to reduce the size of braille texts and to increase reading speed. (See Contracted braille.)
Writing braille
Braille may be produced by hand using a slate and stylus in which each dot is created from the back of the page, writing in mirror image, or it may be produced on a braille typewriter or Perkins Brailler, or an electronic Brailler or braille notetaker. Braille users with access to smartphones may also activate the on-screen braille input keyboard, to type braille symbols on to their device by placing their fingers on to the screen according to the dot configuration of the symbols they wish to form. These symbols are automatically translated into print on the screen. The different tools that exist for writing braille allow the braille user to select the method that is best for a given task. For example, the slate and stylus is a portable writing tool, much like the pen and paper for the sighted. Errors can be erased using a braille eraser or can be overwritten with all six dots (). Interpoint refers to braille printing that is offset, so that the paper can be embossed on both sides, with the dots on one side appearing between the divots that form the dots on the other.
Using a computer or other electronic device, Braille may be produced with a braille embosser (printer) or a refreshable braille display (screen).
Eight-dot braille
Braille has been extended to an 8-dot code, particularly for use with braille embossers and refreshable braille displays. In 8-dot braille the additional dots are added at the bottom of the cell, giving a matrix 4 dots high by 2 dots wide. The additional dots are given the numbers 7 (for the lower-left dot) and 8 (for the lower-right dot). Eight-dot braille has the advantages that the case of an individual letter is directly coded in the cell containing the letter and that all the printable ASCII characters can be represented in a single cell. All 256 (2<sup>8</sup>) possible combinations of 8 dots are encoded by the Unicode standard. Braille with six dots is frequently stored as Braille ASCII.
Letters
The first 25 braille letters, up through the first half of the 3rd decade, transcribe a–z (skipping w). In English Braille, the rest of that decade is rounded out with the ligatures and, for, of, the, and with. Omitting dot 3 from these forms the 4th decade, the ligatures ch, gh, sh, th, wh, ed, er, ou, ow and the letter w.
(See English Braille.)
Formatting
Various formatting marks affect the values of the letters that follow them. They have no direct equivalent in print. The most important in English Braille are:
That is, is read as capital 'A', and as the digit '1'.
Punctuation
Basic punctuation marks in English Braille include:
is both the question mark and the opening quotation mark. Its reading depends on whether it occurs before a word or after.
is used for both opening and closing parentheses. Its placement relative to spaces and other characters determines its interpretation.
Punctuation varies from language to language. For example, French Braille uses for its question mark and swaps the quotation marks and parentheses (to and ); it uses the period () for the decimal point, as in print, and the decimal point () to mark capitalization.
Contractions
Braille contractions are words and affixes that are shortened so that they take up fewer cells. In English Braille, for example, the word afternoon is written with just three letters, , much like stenoscript. There are also several abbreviation marks that create what are effectively logograms. The most common of these is dot 5, which combines with the first letter of words. With the letter m, the resulting word is mother. There are also ligatures ("contracted" letters), which are single letters in braille but correspond to more than one letter in print. The letter and, for example, is used to write words with the sequence a-n-d in them, such as hand.
Page dimensions
Most braille embossers support between 34 and 40 cells per line, and 25 lines per page.
A manually operated Perkins braille typewriter supports a maximum of 42 cells per line (its margins are adjustable), and typical paper allows 25 lines per page.
A large interlining Stainsby has 36 cells per line and 18 lines per page.
An A4-sized Marburg braille frame, which allows interpoint braille (dots on both sides of the page, offset so they do not interfere with each other), has 30 cells per line and 27 lines per page.
Braille writing machine
A Braille writing machine is a typewriter with six keys that allows the user to write braille on a regular hard copy page.
The first Braille typewriter to gain general acceptance was invented by Frank Haven Hall (Superintendent of the Illinois School for the Blind), and was presented to the public in 1892.
The Stainsby Brailler, developed by Henry Stainsby in 1903, is a mechanical writer with a sliding carriage that moves over an aluminium plate as it embosses Braille characters. An improved version was introduced around 1933.
In 1951 David Abraham, a woodworking teacher at the Perkins School for the Blind, produced a more advanced Braille typewriter, the Perkins Brailler.
Braille printers or embossers were produced in the 1950s.
In 1960 Robert Mann, a teacher at MIT, wrote DOTSYS, software that allowed automatic braille translation, and another group created an embossing device called the "M.I.T. Braillemboss". The Mitre Corporation team of Robert Gildea, Jonathan Millen, Reid Gerhart and Joseph Sullivan (now president of Duxbury Systems) developed DOTSYS III, the first braille translator written in a portable programming language. DOTSYS III was developed for the Atlanta Public Schools as a public domain program.
In 1991 Ernest Bate developed the Mountbatten Brailler, an electronic machine used to type braille on braille paper, giving it a number of additional features such as word processing, audio feedback and embossing. This version was improved in 2008 with a quiet writer that had an erase key.
In 2011 David S. Morgan produced the first SMART Brailler machine, which added a text-to-speech function and allowed digital capture of data as it was entered.
Braille reading
Braille is traditionally read in hardcopy form, such as with paper books written in braille, documents produced in paper braille (such as restaurant menus), and braille labels or public signage. It can also be read on a refreshable braille display either as a stand-alone electronic device or connected to a computer or smartphone. Refreshable braille displays convert what is visually shown on a computer or smartphone screen into braille through a series of pins that rise and fall to form braille symbols. Currently, fewer than 1% of all printed books have been translated into hardcopy braille.
The fastest braille readers apply a light touch and read braille with two hands, although reading braille with one hand is also possible. Although the finger can read only one braille character at a time, the brain chunks braille at a higher level, processing words a digraph, root or suffix at a time. The processing largely takes place in the visual cortex.
Literacy
Children who are blind miss out on fundamental parts of early and advanced education if not provided with the necessary tools, such as access to educational materials in braille. Children who are blind or visually impaired can begin learning foundational braille skills from a very young age to become fluent braille readers as they get older. Sighted children are naturally exposed to written language on signs, on TV and in the books they see. Blind children require the same early exposure to literacy, through access to braille rich environments and opportunities to explore the world around them. Print-braille books, for example, present text in both print and braille and can be read by sighted parents to blind children (and vice versa), allowing blind children to develop an early love for reading even before formal reading instruction begins.
Adults who experience sight loss later in life or who did not have the opportunity to learn it when they were younger can also learn braille. In most cases, adults who learn braille were already literate in print before vision loss and so instruction focuses more on developing the tactile and motor skills needed to read braille.
While different countries publish statistics on how many readers in a given organization request braille, these numbers only provide a partial picture of braille literacy statistics. For example, this data does not survey the entire population of braille readers or always include readers who are no longer in the school system (adults) or readers who request electronic braille materials. Therefore, there are currently no reliable statistics on braille literacy rates, as described in a publication in the Journal of Visual Impairment and Blindness. Regardless of the precise percentage of braille readers, there is consensus that braille should be provided to all those who benefit from it.
Numerous factors influence access to braille literacy, including school budget constraints, technology advancements such as screen-reader software, access to qualified instruction, and different philosophical views over how blind children should be educated.
In the USA, a key turning point for braille literacy was the passage of the Rehabilitation Act of 1973, an act of Congress that moved thousands of children from specialized schools for the blind into mainstream public schools. Because only a small percentage of public schools could afford to train and hire braille-qualified teachers, braille literacy declined after the law took effect. Rates have since improved slightly, in part because of pressure from consumers and advocacy groups that has led 27 states to pass legislation mandating that children who are legally blind be given the opportunity to learn braille.
In 1998 there were 57,425 legally blind students registered in the United States, but only 10% (5,461) of them used braille as their primary reading medium.
Early braille education is crucial to literacy for a blind or low-vision child. A study conducted in the state of Washington found that people who learned braille at an early age did just as well as, if not better than, their sighted peers in several areas, including vocabulary and comprehension. In the preliminary adult study, while evaluating the correlation between adult literacy skills and employment, it was found that 44% of the participants who had learned to read in braille were unemployed, compared to the 77% unemployment rate of those who had learned to read using print. Currently, among the estimated 85,000 blind adults in the United States, 90% of those who are braille-literate are employed. Among adults who do not know braille, only 33% are employed. These findings suggest that braille reading proficiency provides an essential skill set that allows blind or low-vision children to compete with their sighted peers in a school environment and later in life as they enter the workforce.
Regardless of the specific percentage of braille readers, proponents point out the importance of increasing access to braille for all those who can benefit from it.
Braille transcription
Although it is possible to transcribe print by simply substituting the equivalent braille character for each printed character, in English such a character-by-character transcription (known as uncontracted braille) is typically used only by beginners or by those who engage in short reading tasks (such as reading household labels).
Braille characters are much larger than their printed equivalents, and the standard 11" by 11.5" (28 cm × 30 cm) page has room for only 25 lines of 43 characters. To reduce space and increase reading speed, most braille alphabets and orthographies use ligatures, abbreviations, and contractions. Virtually all English braille books in hardcopy (paper) format are transcribed in contracted braille: The Library of Congress's Instruction Manual for Braille Transcribing runs to over 300 pages, and braille transcribers must pass certification tests.
Uncontracted braille was previously known as grade 1 braille, and contracted braille was previously known as grade 2 braille. Uncontracted braille is a direct transliteration of print words (one-to-one correspondence); hence, the word "about" would contain all the same letters in uncontracted braille as it does in inkprint. Contracted braille includes short forms to save space; hence, for example, the letters "ab" when standing alone represent the word "about" in English contracted braille. In English, some braille users only learn uncontracted braille, particularly if braille is being used for shorter reading tasks such as reading household labels. However, those who plan to use braille for educational and employment purposes and longer reading texts often go on to contracted braille.
The system of contractions in English Braille begins with a set of 23 words contracted to single characters. Thus the word but is contracted to the single letter b, can to c, do to d, and so on. Even this simple rule creates issues requiring special cases; for example, d is, specifically, an abbreviation of the verb do; the noun do representing the note of the musical scale is a different word and must be spelled out.
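The word-sign idea can be illustrated with a toy sketch (using only the contractions named above; real transcription follows a much larger rule set, including context-dependent exceptions such as the musical-note do, which the sketch deliberately does not attempt):

```python
# Illustrative subset of English braille word-signs taken from the text above.
WORD_SIGNS = {"but": "b", "can": "c", "do": "d"}

def contract(words):
    """Replace words standing alone by their single-letter word-signs."""
    return [WORD_SIGNS.get(w, w) for w in words]

print(contract(["but", "can", "sing"]))  # ['b', 'c', 'sing']
```

Even this naive substitution shows why the real rules need special cases: a bare dictionary lookup cannot tell the verb do from the homographic noun, which must be spelled out.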
Portions of words may be contracted, and many rules govern this process. For example, the character with dots 2-3-5 (the letter "f" lowered in the Braille cell) stands for "ff" when used in the middle of a word. At the beginning of a word, this same character stands for the word "to"; the character is written in braille with no space following it. (This contraction was removed in the Unified English Braille Code.) At the end of a word, the same character represents an exclamation point.
Some contractions resemble each other more closely than their print equivalents do. For example, the contraction , meaning "letter", differs from , meaning "little", only by one dot in the second letter: little, letter. This causes greater confusion between the braille spellings of these words and can hinder the learning process of contracted braille.
The contraction rules take into account the linguistic structure of the word; thus, contractions are generally not to be used when their use would alter the usual braille form of a base word to which a prefix or suffix has been added. Some portions of the transcription rules are not fully codified and rely on the judgment of the transcriber. Thus, when the contraction rules permit the same word to be written in more than one way, preference is given to "the contraction that more nearly approximates correct pronunciation".
"Grade 3 braille" is a variety of non-standardized systems that include many additional shorthand-like contractions. They are not used for publication, but by individuals for their personal convenience.
Braille translation software
When people produce braille, this is called braille transcription. When computer software produces braille, the software is called a braille translator. Braille translation software exists for almost all of the common languages of the world and for many technical areas, such as mathematics (mathematical notation), for example WIMATS, music (musical notation), and tactile graphics.
Braille reading techniques
Since braille is one of the few writing systems that relies on tactile rather than visual perception, a braille reader must develop new skills. One important skill for braille readers is the ability to apply smooth and even pressure when running the fingers along the words. There are many different styles and techniques used for the understanding and development of braille, though a study by B. F. Holland suggests that no specific technique is superior to any other.
Another study by Lowenfield & Abel shows that braille can be read "the fastest and best... by students who read using the index fingers of both hands". Another important reading skill emphasized in this study is to finish reading the end of a line with the right hand and to find the beginning of the next line with the left hand simultaneously.
International uniformity
When Braille was first adapted to languages other than French, many schemes were adopted, including mapping the native alphabet to the alphabetical order of French – e.g. in English W, which was not in the French alphabet at the time, is mapped to braille X, X to Y, Y to Z, and Z to the first French-accented letter – or completely rearranging the alphabet such that common letters are represented by the simplest braille patterns. This state of affairs greatly hindered mutual intelligibility. In 1878, the International Congress on Work for the Blind, held in Paris, proposed an international braille standard, under which braille codes for different languages and scripts would be based not on the order of a particular alphabet but on phonetic correspondence and transliteration to Latin.
This unified braille has been applied to the languages of India and Africa, Arabic, Vietnamese, Hebrew, Russian, and Armenian, as well as nearly all Latin-script languages. In Greek, for example, γ (g) is written as Latin g, despite the fact that it has the alphabetic position of c; Hebrew ב (b), the second letter of the alphabet and cognate with the Latin letter b, is sometimes pronounced /b/ and sometimes /v/, and is written b or v accordingly; Russian ц (ts) is written as c, which is the usual letter for /ts/ in those Slavic languages that use the Latin alphabet; and Arabic ف (f) is written as f, despite being historically p and occurring in that part of the Arabic alphabet (between historic o and q).
Other braille conventions
Other systems for assigning values to braille patterns are also followed beside the simple mapping of the alphabetical order onto the original French order. Some braille alphabets start with unified braille, and then diverge significantly based on the phonology of the target languages, while others diverge even further.
In the various Chinese systems, traditional braille values are used for initial consonants and the simple vowels. In both Mandarin and Cantonese Braille, however, characters have different readings depending on whether they are placed in syllable-initial (onset) or syllable-final (rime) position. For instance, the cell for Latin k, , represents Cantonese k (g in Yale and other modern romanizations) when initial, but aak when final, while Latin j, , represents Cantonese initial j but final oei.
Novel systems of braille mapping include Korean, which adopts separate syllable-initial and syllable-final forms for its consonants, explicitly grouping braille cells into syllabic groups in the same way as hangul. Japanese, meanwhile, combines independent vowel dot patterns and modifier consonant dot patterns into a single braille cell – an abugida representation of each Japanese mora.
Uses
Braille is read by people who are blind, deafblind or who have low vision, and by both those born with a visual impairment and those who experience sight loss later in life. Braille may also be used by print-impaired people who, although they may be fully sighted, are unable to read print because of a physical disability. Even individuals with low vision may find that they benefit from braille, depending on their level of vision or the context (for example, when lighting or colour contrast is poor). Braille is used for both short and long reading tasks. Examples of short reading tasks include braille labels for identifying household items (or cards in a wallet), reading elevator buttons, accessing phone numbers, recipes, grocery lists and other personal notes. Examples of longer reading tasks include using braille to access educational materials, novels and magazines. People with access to a refreshable braille display can also use braille for reading email and ebooks, browsing the internet and accessing other electronic documents. It is also possible to adapt or purchase playing cards and board games in braille.
In India, some acts of Parliament have been published in braille, such as The Right to Information Act. Sylheti Braille is used in Northeast India.
In Canada, passenger safety information in braille and tactile seat row markers are required aboard planes, trains, large ferries, and interprovincial buses pursuant to the Canadian Transportation Agency's regulations.
In the United States, the Americans with Disabilities Act of 1990 requires various building signage to be in braille.
In the United Kingdom, it is required that medicines have the name of the medicine in Braille on the labeling.
Currency
The current series of Canadian banknotes has a tactile feature consisting of raised dots that indicate the denomination, allowing bills to be easily identified by blind or low vision people. It does not use standard braille numbers to identify the value. Instead, the number of full braille cells, which can be simply counted by both braille readers and non-braille readers alike, is an indicator of the value of the bill.
Mexican bank notes, Australian bank notes, Indian rupee notes, Israeli new shekel notes and Russian ruble notes also have special raised symbols to make them identifiable by persons who are blind or have low vision.
Euro coins were designed in cooperation with organisations representing blind people, and as a result they incorporate many features allowing them to be distinguished by touch alone. In addition, their visual appearance is designed to make them easy to tell apart for persons who cannot read the inscriptions on the coins. "A good design for the blind and partially sighted is a good design for everybody" was the principle behind the cooperation of the European Central Bank and the European Blind Union during the design phase of the first series Euro banknotes in the 1990s. As a result, the design of the first euro banknotes included several characteristics which aid both the blind and partially sighted to confidently use the notes.
Australia introduced the tactile feature onto its five-dollar banknote in 2016.
In the United Kingdom, the front of the £10 polymer note (the side with raised print), has two clusters of raised dots in the top left hand corner, and the £20 note has three. This tactile feature helps blind and partially sighted people identify the value of the note.
In 2003 the US Mint introduced the commemorative Alabama State Quarter, which honored Alabama native Helen Keller on the reverse, including the name Helen Keller in both English script and braille. This appears to be the first known use of braille on United States coinage, though it is not standard on all coins of this type.
Unicode
The Braille set was added to the Unicode Standard in version 3.0 (1999).
Most braille embossers and refreshable braille displays do not use the Unicode code points, but instead reuse the 8-bit code points that are assigned to standard ASCII for braille ASCII. (Thus, for simple material, the same bitstream may be interpreted equally as visual letter forms for sighted readers or their exact semantic equivalent in tactile patterns for blind readers. However some codes have quite different tactile versus visual interpretations and most are not even defined in Braille ASCII.)
Some embossers have proprietary control codes for 8-dot braille or for full graphics mode, where dots may be placed anywhere on the page without leaving any space between braille cells so that continuous lines can be drawn in diagrams, but these are rarely used and are not standard.
The Unicode standard encodes 6-dot and 8-dot braille glyphs according to their binary appearance, rather than following their assigned numeric order. Dot 1 corresponds to the least significant bit of the low byte of the Unicode scalar value, and dot 8 to the high bit of that byte.
The Unicode block for braille is U+2800–U+28FF. The mapping of dot patterns to characters is language-dependent; even within English there are competing conventions (see American Braille and English Braille).
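The binary layout described above can be illustrated with a short sketch (the helper name is my own): dot n simply sets bit n - 1 of the offset from U+2800.

```python
def braille_char(dots):
    """Map a set of raised dot numbers (1-8) to the Unicode braille character.

    Per the Unicode layout, dot n sets bit n-1 of the offset from U+2800."""
    offset = 0
    for dot in dots:
        offset |= 1 << (dot - 1)
    return chr(0x2800 + offset)

# Dots 1, 2 and 4 set bits 0, 1 and 3: offset 0b1011 = 0x0B, giving U+280B
print(braille_char({1, 2, 4}))   # BRAILLE PATTERN DOTS-124
print(braille_char(set()))       # the blank pattern, U+2800
```

Which letter a given pattern represents is, as noted above, a matter of the language-specific braille code, not of Unicode itself.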
Observation
Every year on 4 January, World Braille Day is observed internationally to commemorate the birth of Louis Braille and to recognize his efforts. Although the event is not considered a public holiday, it has been recognized by the United Nations as an official day of celebration since 2019.
Braille devices
A variety of contemporary electronic devices serve the needs of blind people by operating in braille, such as refreshable braille displays and braille e-books, which use different technologies for conveying graphic information of different types (pictures, maps, graphs, texts, etc.).
See also
("the Braille man of India")
List of binary codes
List of international common standards
Notes
References
External links
L'association Valentin Haüy (in French)
Acting for the autonomy of blind and partially sighted persons (Corporate brochure) (Microsoft Word file, in English)
Alternate Text Production Center of the California Community Colleges.
Braille Part 1 Text To Speech For The Visually Impaired YouTube
Braille information and advice – Sense UK
Braille at Omniglot
1824 introductions
Assistive technology
Augmentative and alternative communication
Character sets
Digital typography
French inventions
Latin-script representations
Writing systems introduced in the 19th century
|
https://en.wikipedia.org/wiki/Biochemistry
|
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems, in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.
History
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argue that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others consider Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, as its beginning, or even earlier to the 18th-century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists.
The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister.
It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level.
Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Starting materials: the chemical elements of life
Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals seem to need none. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need only ultra-small amounts).
Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.
Biomolecules
The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble into larger complexes, often needed for biological activity.
Carbohydrates
Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose.
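The general composition CnH2nOn can be checked with a short sketch (atomic masses are standard reference values; note that deoxy sugars such as deoxyribose deviate from the formula because they lack one oxygen):

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def monosaccharide(n):
    """Formula and molar mass for a simple sugar C_n H_2n O_n (n >= 3)."""
    assert n >= 3
    formula = f"C{n}H{2 * n}O{n}"
    mass = n * ATOMIC_MASS["C"] + 2 * n * ATOMIC_MASS["H"] + n * ATOMIC_MASS["O"]
    return formula, mass

formula, mass = monosaccharide(6)
print(formula, round(mass, 2))  # C6H12O6 180.16 -- glucose and fructose are isomers
```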
In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring, called septanoses, are rarely encountered.
Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance.
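The mass bookkeeping of such a condensation is easy to sketch: each glycosidic bond releases one molecule of water, so a disaccharide weighs one water less than its two parts (molar masses below are rounded reference values):

```python
WATER = 18.015      # g/mol released per glycosidic bond formed
GLUCOSE = 180.156   # C6H12O6
FRUCTOSE = 180.156  # C6H12O6, an isomer of glucose

def condense(*monomer_masses):
    """Molar mass after joining monomers by dehydration (one water per bond)."""
    bonds = len(monomer_masses) - 1
    return sum(monomer_masses) - bonds * WATER

print(round(condense(GLUCOSE, FRUCTOSE), 2))  # sucrose, C12H22O11: ~342.3 g/mol
```

Hydrolysis is simply the reverse: the disaccharide plus one water gives back the two monosaccharide masses.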
When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined form a polysaccharide. They can be joined in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant's cell walls and glycogen is used as a form of energy storage in animals.
Sugars can be characterized as having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH side chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Sucrose (saccharose) does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
Lipids
Lipids comprise a diverse range of molecules and to some extent are a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid-derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid.
Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).
Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, which are the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilisers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome).
Proteins
Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". The side chain "R" is different for each amino acid, of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side-chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
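The dehydration bookkeeping described above applies to every peptide bond: a peptide's molar mass is the sum of its free amino acid masses minus one water per bond. A sketch, using rounded reference masses for three of the 20 standard amino acids:

```python
WATER = 18.015  # g/mol lost per peptide bond formed
# Approximate molar masses of a few free amino acids (g/mol)
AMINO_ACID_MASS = {"G": 75.07, "A": 89.09, "S": 105.09}

def peptide_mass(sequence):
    """Molar mass of a peptide given its one-letter amino acid sequence."""
    bonds = len(sequence) - 1
    return sum(AMINO_ACID_MASS[aa] for aa in sequence) - bonds * WATER

print(round(peptide_mass("GA"), 3))  # glycylalanine, a dipeptide: ~146.145 g/mol
```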
Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain.
The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10¹¹ or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
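The quoted rate enhancement can be sanity-checked with a one-line calculation (assuming a 365.25-day year):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # about 3.16e7 s
uncatalyzed_time = 3_000 * SECONDS_PER_YEAR  # ~9.5e10 s without the enzyme
rate_enhancement = 1e11                      # factor quoted in the text

catalyzed_time = uncatalyzed_time / rate_enhancement
print(round(catalyzed_time, 2))  # ~0.95 s, i.e. under a second, as stated
```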
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helixes can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
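The classification above can be captured as simple sets, which also confirms that the three groups together cover all 20 standard amino acids:

```python
ESSENTIAL = {"isoleucine", "leucine", "lysine", "methionine",
             "phenylalanine", "threonine", "tryptophan", "valine"}
NONESSENTIAL = {"alanine", "asparagine", "aspartate", "cysteine", "glutamate",
                "glutamine", "glycine", "proline", "serine", "tyrosine"}
# Synthesized by mammals, but not fast enough for young, growing animals
CONDITIONALLY_ESSENTIAL = {"arginine", "histidine"}

ALL_STANDARD = ESSENTIAL | NONESSENTIAL | CONDITIONALLY_ESSENTIAL
print(len(ALL_STANDARD))  # 20 standard amino acids in total
```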
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein.
A similar process is used to break down proteins. It is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release the ammonia into the water where it is quickly diluted. In general, mammals convert the ammonia into urea, via the urea cycle.
In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
Nucleic acids
"Nucleic acid" is the generic name of a family of biopolymers, so called because of their prevalence in cellular nuclei. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid (similar to a zipper). Adenine pairs with thymine (in DNA) or uracil (in RNA), while cytosine and guanine pair only with one another. Adenine–thymine and adenine–uracil pairs form two hydrogen bonds, while cytosine–guanine pairs form three.
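These complementarity rules lend themselves to a short sketch; the strand is reversed because the two strands of a duplex run antiparallel:

```python
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
HYDROGEN_BONDS = {"AT": 2, "AU": 2, "CG": 3}  # hydrogen bonds per base pair

def reverse_complement(strand):
    """Return the complementary DNA strand, read 5' to 3'."""
    return "".join(DNA_PAIR[base] for base in reversed(strand))

print(reverse_complement("ATCG"))  # CGAT
```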
Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
Metabolism
Carbohydrates as energy source
Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents from the conversion of NAD+ (nicotinamide adenine dinucleotide, oxidized form) to NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
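The net stoichiometry described above can be written out explicitly (a sketch: negative values are consumed, positive values produced, per molecule of glucose):

```python
# Net change per glucose entering glycolysis (consumed < 0, produced > 0)
GLYCOLYSIS_NET = {
    "glucose": -1,
    "pyruvate": +2,
    "ATP": +2,   # net: 4 made by substrate-level phosphorylation minus 2 invested
    "NAD+": -2,
    "NADH": +2,
}

# Carbon is conserved: one 6-carbon glucose yields two 3-carbon pyruvates
assert 6 * -GLYCOLYSIS_NET["glucose"] == 3 * GLYCOLYSIS_NET["pyruvate"]
print(GLYCOLYSIS_NET["ATP"])  # 2
```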
Aerobic
In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient across a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
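The ATP arithmetic in this paragraph can be tallied directly, using the per-carrier yields implied by the text's numbers (3 ATP per NADH, 2 per reduced quinone; modern textbooks often quote slightly lower values of about 2.5 and 1.5):

```python
ATP_PER_NADH = 3    # implied by "24 from the 8 NADH"
ATP_PER_QUINOL = 2  # implied by "4 from the 2 quinols"

substrate_level = 2 + 2  # glycolysis + citric acid cycle
oxidative = 8 * ATP_PER_NADH + 2 * ATP_PER_QUINOL
total = substrate_level + oxidative
print(oxidative, total)  # 28 32, matching the figures in the text
```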
Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate.
Gluconeogenesis is the generation of glucose from noncarbohydrate precursors, such as fat and protein. It becomes significant when glycogen supplies in the liver are exhausted, and it can draw on many sources, including amino acids, glycerol, and Krebs cycle intermediates. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose using this process, which is not simply the reverse of glycolysis: it actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathways of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis, and release of glucose into the bloodstream are called the Cori cycle.
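The energy asymmetry of the Cori cycle follows directly from these figures (a sketch using the ATP counts quoted above):

```python
GLYCOLYSIS_ATP_GAINED = 2      # in muscle, per glucose converted to lactate
GLUCONEOGENESIS_ATP_SPENT = 6  # in liver, per glucose regenerated from lactate

def cori_cycle_net_cost(turns):
    """Net ATP the organism spends per turn; the deficit is paid by the liver."""
    return turns * (GLUCONEOGENESIS_ATP_SPENT - GLYCOLYSIS_ATP_GAINED)

print(cori_cycle_net_cost(1))  # 4 ATP net cost per turn of the cycle
```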
Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules, molecular biology studies their biological activity, genetics studies their heredity, which happens to be carried by their genome. This is shown in the following schematic that depicts one possible view of the relationships between the fields:
Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level.
Genetics is the study of the effect of genetic differences in organisms. These effects can often be inferred from the absence of a normal component (e.g. one gene), through the study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies.
Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA.
Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
See also
Fundamental Concepts And Processes In Biochemistry
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle
Notes
a. Fructose is not the only sugar found in fruits. Glucose and sucrose are also found in varying quantities in various fruits, and sometimes exceed the fructose present. For example, 32% of the edible portion of a date is glucose, compared with 24% fructose and 8% sucrose. However, peaches contain more sucrose (6.66%) than they do fructose (0.93%) or glucose (1.47%).
References
Cited literature
Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell
4th Edition, Routledge, March 2002, hardcover, 1616 pp.
3rd Edition, Garland, 1994
2nd Edition, Garland, 1989
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.
External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI.
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook.
Biotechnology
Molecular biology
|
https://en.wikipedia.org/wiki/Biopolymer
|
Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. Polynucleotides, such as RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs).
In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering.
Biopolymers versus synthetic polymers
A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (lignocellulose, for example, is not well defined). In the case of proteins, the exact chemical composition and the sequence in which the monomer units are arranged is called the primary structure. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This leads to a distribution of molecular masses that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most in vivo systems, all biopolymers of a type (say one specific protein) are alike: they all contain the same sequence and number of monomers and thus all have the same mass. This phenomenon is called monodispersity, in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a dispersity of 1.
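The monodispersity/polydispersity contrast can be made concrete with the dispersity Đ = Mw/Mn, the ratio of the weight-average to the number-average molar mass. A minimal sketch (the chain masses are made-up illustrative numbers):

```python
# Dispersity (Đ) = Mw / Mn, where Mn is the number-average and Mw the
# weight-average molar mass of the chains in a sample. A template-
# synthesized biopolymer (all chains identical) has Đ = 1; a synthetic
# sample with a spread of chain lengths has Đ > 1.

def dispersity(masses):
    """Return (Mn, Mw, Đ) for a list of molar masses, one per chain."""
    n = len(masses)
    mn = sum(masses) / n                            # number average
    mw = sum(m * m for m in masses) / sum(masses)   # weight average
    return mn, mw, mw / mn

# Monodisperse: every chain of one specific protein has the same mass.
print(dispersity([5000.0] * 4))                       # Đ = 1.0

# Polydisperse: a synthetic polymer with varying chain lengths.
print(dispersity([3000.0, 5000.0, 7000.0, 9000.0]))  # Đ > 1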
Biopolymers versus biobased polymers
“Biopolymers” are usually not equal to “biobased polymers”. Biobased polymers are polymers chemically or biologically synthesized (fully or partially) from biomass monomers, such as polyesters (e.g., polyhydroxyalkanoates (PHAs) and polylactic acid (PLA)). In this respect, the only polymers that can be regarded as both biopolymers and biobased polymers are those that are biologically produced (by microbes) from biomass carbon sources (e.g., sugars and lipids), and examples of these include PHAs, bacterial cellulose, gellan gum, xanthan gum, and curdlan.
Conventions and nomenclature
Polypeptides
The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids.
Nucleic acids
The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer.
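As a small illustration of the 5'-to-3' convention, the complementary strand of a DNA sequence, when itself written 5' to 3', is the reverse complement of the original. A minimal sketch (the sequences are arbitrary examples):

```python
# A nucleic acid sequence is written 5' -> 3'. The complementary strand,
# when itself written 5' -> 3', is the reverse complement: complement
# each base, then reverse the whole string.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Reverse complement of a DNA sequence given in 5'->3' order."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("ATGC"))  # -> GCAT
```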
Polysaccharides
Polysaccharides (sugar polymers) can be linear or branched and are typically joined with glycosidic bonds. The exact placement of the linkage can vary, and the orientation of the linking functional groups is also important, resulting in α- and β-glycosidic bonds with numbering definitive of the linking carbons' location in the ring. In addition, many saccharide units can undergo various chemical modifications, such as amination, and can even form parts of other molecules, such as glycoproteins.
Structural characterization
There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometer techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners.
Common biopolymers
Collagen: Collagen is the primary structural protein of vertebrates and the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers, and is used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable, and biocompatible material. It has therefore been used in many medical applications, such as the treatment of tissue infection, drug delivery systems, and gene therapy.
Silk fibroin: Silk fibroin (SF) is another protein-rich biopolymer that can be obtained from different silkworm species, such as the mulberry worm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but has strong adhesive properties due to its insoluble and fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulation properties and platelet adhesion. Silk fibroin has additionally been found to support stem cell proliferation in vitro.
Gelatin: Gelatin is obtained from type I collagen consisting of cysteine, and is produced by the partial hydrolysis of collagen from the bones, tissues and skin of animals. There are two types of gelatin, Type A and Type B. Type A is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B is derived by alkaline hydrolysis and contains 18% nitrogen and no amide groups. Elevated temperatures cause the gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups, such as NH2, SH, and COOH, which allow gelatin to be modified using nanoparticles and biomolecules. Gelatin is an extracellular matrix protein, which allows it to be applied in applications such as wound dressings, drug delivery and gene transfection.
Starch: Starch is an inexpensive, biodegradable biopolymer that is copious in supply. Nanofibers and microfibers can be added to the polymer matrix to improve the mechanical properties of starch, increasing its elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Being biodegradable and renewable, starch is used for many applications, including plastics and pharmaceutical tablets.
Cellulose: Cellulose is very structured, with stacked chains that result in stability and strength. The strength and stability come from the straighter shape of cellulose, caused by glucose monomers joined together by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in applications due to its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is used vastly in the form of nano-fibrils called nano-cellulose. At low concentrations, nano-cellulose produces a transparent gel material. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field.
Alginate: Alginate is the most abundant marine natural polymer, derived from brown seaweed. Alginate biopolymer applications range from packaging, textile and food industry to biomedical and chemical engineering. The first ever application of alginate was in the form of wound dressing, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration, and keeps a stable temperature environment. Additionally, there have been developments with alginate as a drug delivery medium, as drug release rate can easily be manipulated due to a variety of alginate densities and fibrous composition.
Biopolymer applications
The applications of biopolymers can be categorized under two main fields, which differ due to their biomedical and industrial use.
Biomedical
One of the main purposes of biomedical engineering is to mimic body parts to sustain normal body functions; because of their biocompatible properties, biopolymers are used widely in tissue engineering, medical devices and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and overall medical applications due to their mechanical properties. They provide characteristics like wound healing, catalysis of bioactivity, and non-toxicity. Compared to synthetic polymers, which can present various disadvantages like immunogenic rejection and toxicity after degradation, many biopolymers integrate better with the body, as they also possess more complex structures, similar to those of the human body.
More specifically, polypeptides like collagen and silk are biocompatible materials that are being used in ground-breaking research, as these are inexpensive and easily attainable materials. Gelatin polymer is often used in wound dressings, where it acts as an adhesive. Scaffolds and films with gelatin can hold drugs and other nutrients that can be supplied to a wound for healing.
As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of their use:
Collagen based drug delivery systems: collagen films act like a barrier membrane and are used to treat tissue infections, such as infected corneal tissue or liver cancer. Collagen films have also been used as gene delivery carriers, which can promote bone formation.
Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. Collagen based implants are used for cultured skin cells or drug carriers that are used for burn wounds and replacing skin.
Collagen as haemostat: When collagen interacts with platelets it causes a rapid coagulation of blood. This rapid coagulation produces a temporary framework so the fibrous stroma can be regenerated by host cells. Collagen based haemostat reduces blood loss in tissues and helps manage bleeding in organs such as the liver and spleen.
Chitosan is another popular biopolymer in biomedical research. Chitosan is derived from chitin, the main component in the exoskeleton of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science. Chitosan is biocompatible; it is highly bioactive, meaning it stimulates a beneficial response from the body; it can biodegrade, which can eliminate the need for a second surgery in implant applications; it can form gels and films; and it is selectively permeable. These properties allow for various biomedical applications of chitosan.
Chitosan as drug delivery: Chitosan is used mainly with drug targeting because it has potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can also produce better anticancer effects by causing gradual release of free drug into cancerous tissue.
Chitosan as an anti-microbial agent: Chitosan is used to stop the growth of microorganisms. It performs antimicrobial functions against microorganisms such as algae, fungi and bacteria (including gram-positive bacteria), as well as different yeast species.
Chitosan composite for tissue engineering: Chitosan powder blended with alginate is used to form functional wound dressings. These dressings create a moist, biocompatible environment which aids in the healing process. This wound dressing is also biodegradable and has porous structures that allows cells to grow into the dressing. Furthermore, thiolated chitosans (see thiomers) are used for tissue engineering and wound healing, as these biopolymers are able to crosslink via disulfide bonds forming stable three-dimensional networks.
Industrial
Food: Biopolymers are being used in the food industry for things like packaging, edible encapsulation films and coating foods. Polylactic acid (PLA) is very common in the food industry due to its clear color and resistance to water. However, most polymers have a hydrophilic nature and start deteriorating when exposed to moisture. Biopolymers are also being used as edible films that encapsulate foods. These films can carry things like antioxidants, enzymes, probiotics, minerals, and vitamins, which are supplied to the body when the encapsulated food is consumed.
Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoates (PHAs), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging. However, their barrier properties (either moisture-barrier or gas-barrier properties) and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water to get through the packaging which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer that has great barrier characteristics and is now being used to correct the barrier obstacles from PLA and starch.
Water purification: Chitosan has been used for water purification. It is used as a flocculant that only takes a few weeks or months rather than years to degrade in the environment. Chitosan purifies water by chelation. This is the process in which binding sites along the polymer chain bind with the metal ions in the water forming chelates. Chitosan has been shown to be an excellent candidate for use in storm and wastewater treatment.
As materials
Some biopolymers, such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate, can be used as plastics, replacing the need for polystyrene- or polyethylene-based plastics.
Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting.
Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes, or wheat: when used to produce biopolymers, these are classified as non food crops. These can be converted in the following pathways:
Sugar beet > Glyconic acid > Polyglyconic acid
Starch > (fermentation) > Lactic acid > Polylactic acid (PLA)
Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene
Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, thin films for wrapping.
Environmental impacts
Biopolymers can be sustainable, carbon neutral and renewable, because they are made from plant or animal materials which can be grown indefinitely. Since these materials come from agricultural crops, their use could create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions and reduce CO2 quantities in the atmosphere: this is because the CO2 released when they degrade can be reabsorbed by crops grown to replace them, which makes them close to carbon neutral.
Almost all biopolymers are biodegradable in the natural environment: they are broken down into CO2 and water by microorganisms. These biodegradable biopolymers are also compostable: they can be put into an industrial composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20 μm thick: films which are thicker than that do not qualify as compostable, even though they are "biodegradable". In Europe there is a home composting standard and associated logo that enables consumers to identify and dispose of packaging in their compost heap.
See also
Biomaterials
Bioplastic
Biopolymers & Cell (journal)
Condensation polymers
Condensed tannins
DNA sequence
Melanin
Non food crops
Phosphoramidite
Polymer chemistry
Sequence-controlled polymers
Sequencing
Small molecules
Worm-like chain
References
External links
NNFCC: The UK's National Centre for Biorenewable Energy, Fuels and Materials
Bioplastics Magazine
Biopolymer group
What’s Stopping Bioplastic?
Biomolecules
Polymers
Molecular biology
Molecular genetics
Biotechnology products
Bioplastics
Biomaterials
|
https://en.wikipedia.org/wiki/Bicarbonate
|
In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogencarbonate) is an intermediate form in the deprotonation of carbonic acid. It is a polyatomic anion with the chemical formula HCO3−.
Bicarbonate serves a crucial biochemical role in the physiological pH buffering system.
The term "bicarbonate" was coined in 1814 by the English chemist William Hyde Wollaston. The name lives on as a trivial name.
Chemical properties
The bicarbonate ion (hydrogencarbonate ion) is an anion with the empirical formula HCO3− and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid (HNO3). The bicarbonate ion carries a formal charge of negative one and is an amphiprotic species which has both acidic and basic properties. It is both the conjugate base of carbonic acid (H2CO3) and the conjugate acid of the carbonate ion (CO3^2−), as shown by these equilibrium reactions:
CO3^2− + 2 H2O ⇌ HCO3− + H2O + OH− ⇌ H2CO3 + 2 OH−
H2CO3 + 2 H2O ⇌ HCO3− + H3O+ + H2O ⇌ CO3^2− + 2 H3O+
A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure; in particular, sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality.
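As a numerical illustration of the two equilibria above, the fraction of dissolved inorganic carbon in each form can be computed from the acid dissociation constants. The pKa values below are typical 25 °C textbook figures, assumed for this sketch rather than taken from the article, and ionic-strength corrections (relevant in seawater) are ignored:

```python
# Fraction of dissolved inorganic carbon present as H2CO3* (CO2(aq)
# plus H2CO3), HCO3-, and CO3^2- at a given pH, from the two acid
# dissociation constants. pKa1 ~ 6.35 and pKa2 ~ 10.33 are typical
# 25 degC textbook values (assumptions, not from the article).

def carbonate_fractions(pH, pKa1=6.35, pKa2=10.33):
    h = 10.0 ** (-pH)
    ka1 = 10.0 ** (-pKa1)
    ka2 = 10.0 ** (-pKa2)
    denom = h * h + h * ka1 + ka1 * ka2
    return (h * h / denom,      # H2CO3* fraction
            h * ka1 / denom,    # HCO3- fraction
            ka1 * ka2 / denom)  # CO3^2- fraction

# Around pH 8.1 (roughly that of surface seawater), bicarbonate
# dominates the dissolved inorganic carbon pool:
f_h2co3, f_hco3, f_co3 = carbonate_fractions(8.1)
print(round(f_hco3, 2))  # ~0.98
```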
Physiological role
Bicarbonate (HCO3−) is a vital component of the pH buffering system of the human body (maintaining acid–base homeostasis). 70%–75% of CO2 in the body is converted into carbonic acid (H2CO3), which is the conjugate acid of HCO3− and can quickly turn into it.
With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the volatile equilibrium required to provide prompt resistance to pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). Recently it has been also demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling.
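The buffering relationship can be sketched with the Henderson–Hasselbalch equation for the bicarbonate system. The pKa of 6.1 and the CO2 solubility coefficient of 0.03 mmol/L per mmHg are standard physiology-textbook constants, assumed here and not stated in the article:

```python
import math

# Henderson-Hasselbalch for the bicarbonate buffer:
#   pH = pKa + log10([HCO3-] / (s * pCO2))
# pKa ~ 6.1 and CO2 solubility s ~ 0.03 mmol/L per mmHg are standard
# physiology-textbook values (assumptions, not from the article).

def blood_ph(hco3_mmol_per_l, pco2_mmhg, pka=6.1, s=0.03):
    return pka + math.log10(hco3_mmol_per_l / (s * pco2_mmhg))

# Typical arterial values: [HCO3-] = 24 mmol/L, pCO2 = 40 mmHg.
print(round(blood_ph(24.0, 40.0), 2))  # ~7.4
```

Raising pCO2 with bicarbonate held fixed lowers the computed pH, which is the acidifying direction the buffer resists in vivo.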
Additionally, bicarbonate plays a key role in the digestive system. It raises the internal pH of the stomach after highly acidic digestive juices have finished their digestion of food. Bicarbonate also acts to regulate pH in the small intestine. It is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach.
Bicarbonate in the environment
Bicarbonate is the dominant form of dissolved inorganic carbon in sea water, and in most fresh waters. As such it is an important sink in the carbon cycle.
Some plants, like Chara, utilize carbonate and produce calcium carbonate (CaCO3) as a result of biological metabolism.
In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions. These shift the pH upward until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide, and no new bicarbonate ions are produced, resulting in a rapid fall in pH.
The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle.
Other uses
The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO3, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking.
Ammonium bicarbonate is used in digestive biscuit manufacture.
Diagnostics
In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid–base physiology in the body. It is measured, along with chloride, potassium, and sodium, to assess electrolyte levels in an electrolyte panel test (which has Current Procedural Terminology, CPT, code 80051).
The parameter standard bicarbonate concentration (SBCe) is the bicarbonate concentration in the blood at a PaCO2 of 40 mmHg (5.33 kPa), full oxygen saturation and 36 °C.
Bicarbonate compounds
Sodium bicarbonate
Potassium bicarbonate
Caesium bicarbonate
Magnesium bicarbonate
Calcium bicarbonate
Ammonium bicarbonate
Carbonic acid
See also
Carbon dioxide
Carbonate
Carbonic anhydrase
Hard water
Arterial blood gas test
References
External links
Amphoteric compounds
Anions
Bicarbonates
|
https://en.wikipedia.org/wiki/BQP
|
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.
Definition
BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Qn : n ∈ N}, such that
For all n ∈ N, Qn takes n qubits as input and outputs 1 bit
For all x in L, Pr(Q|x|(x) = 1) ≥ 2/3
For all x not in L, Pr(Q|x|(x) = 0) ≥ 2/3
Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.
Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant, and n is the length of input.
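The amplification argument can be sketched numerically with a classical stand-in: a biased coin that is "correct" with probability 2/3 plays the role of one run of the algorithm. The repetition and trial counts below are arbitrary illustrative choices:

```python
import random

# Amplifying a bounded-error algorithm: run it k times and take the
# majority vote. If each run is independently correct with probability
# 2/3, the Chernoff bound makes the majority wrong with probability
# exponentially small in k.

def majority_of_runs(p_correct, k, rng):
    """Simulate k runs of an algorithm correct w.p. p_correct;
    return True if the majority of runs are correct."""
    correct = sum(rng.random() < p_correct for _ in range(k))
    return correct > k // 2

def estimate_success(p_correct, k, trials=20000, seed=0):
    """Empirical success rate of the k-run majority vote."""
    rng = random.Random(seed)
    wins = sum(majority_of_runs(p_correct, k, rng) for _ in range(trials))
    return wins / trials

print(estimate_success(2/3, 1))    # a single run: ~0.667
print(estimate_success(2/3, 51))   # 51-run majority: close to 1
```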
A complete problem for Promise-BQP
Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and to which every problem in Promise-BQP reduces in polynomial time.
Here is an intuitive problem that is complete for efficient quantum computation, which stems directly from the definition of Promise-BQP. Note that for technical reasons, completeness proofs focus on the promise problem version of BQP. We show that the problem below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class having a trivial promise, for which no complete problems are known).
APPROX-QCIRCUIT-PROB problem
Given a description of a quantum circuit C acting on n qubits with m gates, where m is a polynomial in n and each gate acts on one or two qubits, and two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases:
measuring the first qubit of the state C|0^n⟩ yields |1⟩ with probability ≥ α
measuring the first qubit of the state C|0^n⟩ yields |1⟩ with probability ≤ β
Here, there is a promise on the inputs as the problem does not specify the behavior if an instance is not covered by these two cases.
Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB.
Proof.
Suppose we have an algorithm A that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C acting on n qubits, and two numbers α, β, it distinguishes between the above two cases. We can solve any problem in BQP with this oracle, by setting α = 2/3, β = 1/3.
For any L in BQP, there exists a family of quantum circuits {Qn : n ∈ N} such that for all n ∈ N and every state |x⟩ of n qubits, Pr(Qn(|x⟩) = 1) ≥ 2/3 if x ∈ L, and Pr(Qn(|x⟩) = 0) ≥ 2/3 if x ∉ L. Fix an input |x⟩ of n qubits and the corresponding quantum circuit Qn. We can first construct a circuit Cx such that Cx|0^n⟩ = |x⟩. This can be done easily by hardwiring x and applying a sequence of CNOT gates to flip the qubits. Then we can combine the two circuits to get C' = Qn Cx, and now C'|0^n⟩ = Qn|x⟩. Finally, the result of Qn is necessarily obtained by measuring several qubits and applying some (classical) logic gates to them. We can always defer the measurement and reroute the circuits so that by measuring the first qubit of C', we get the output. This will be our circuit C, and we decide the membership of x in L by running A(C) with α = 2/3, β = 1/3. By definition of BQP, we will either fall into the first case (acceptance) or the second case (rejection), so L reduces to APPROX-QCIRCUIT-PROB.
APPROX-QCIRCUIT-PROB comes in handy when we try to prove the relationships between some well-known complexity classes and BQP.
Relationship to other complexity classes
BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time.
BQP contains P and BPP and is contained in AWPP, PP and PSPACE.
In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classical complexity classes are: P ⊆ BPP ⊆ BQP ⊆ AWPP ⊆ PP ⊆ PSPACE ⊆ EXP.
As the problem of P ≟ PSPACE has not yet been solved, the proof of inequality between BQP and the classes mentioned above is supposed to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH. It can be proven that there exists an oracle A such that BQP^A ⊄ PH^A. In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQP^A) can do things PH^A cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH.
It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy. Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-Complete problems. Paired with the fact that many practical BQP problems are suspected to exist outside of P (it is suspected and not verified because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing.
Adding postselection to BQP results in the complexity class PostBQP which is equal to PP.
We will prove or discuss some of these results below.
BQP and EXP
We begin with an easier containment. To show that BQP ⊆ EXP, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP, since APPROX-QCIRCUIT-PROB is BQP-complete.
A direct simulation (multiplying the 2^n-dimensional state vector by each gate's matrix in turn) runs in exponential time, so APPROX-QCIRCUIT-PROB is in EXP. Note that this algorithm also requires exponential space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity.
BQP and PSPACE
To prove BQP ⊆ PSPACE, we first introduce a technique called the sum of histories.
Sum of Histories
Sum of histories is a technique introduced by physicist Richard Feynman for the path integral formulation of quantum mechanics. We apply this technique to quantum computing to solve APPROX-QCIRCUIT-PROB.
Consider a quantum circuit C, which consists of m gates, G1, G2, ..., Gm, where each Gi comes from a universal gate set and acts on at most two qubits.
To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input |x⟩, and each node in the tree has 2^n children, each representing a state in {0, 1}^n. The weight on a tree edge from a node in the j-th level representing a state |y⟩ to a node in the (j+1)-th level representing a state |z⟩ is ⟨z|G(j+1)|y⟩, the amplitude of |z⟩ after applying G(j+1) to |y⟩. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being |ψ⟩, we sum up the amplitudes of all root-to-leaf paths that end at a node representing |ψ⟩.
More formally, for the quantum circuit C, its sum over histories tree is a tree of depth m, with one level for each gate in addition to the root, and with branching factor 2^n.
Notice that in the sum over histories algorithm to compute some amplitude αx, only one history is stored at any point in the computation. Hence, the sum over histories algorithm uses O(nm) space to compute αx for any x, since O(nm) bits are needed to store the current history in addition to some workspace variables.
Therefore, in polynomial space, we may compute the sum of |αx|^2 over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit.
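As a toy illustration of the sum over histories, the sketch below computes an amplitude of a small circuit by summing, over every sequence of intermediate basis states, the product of one-step amplitudes. The one-qubit, two-Hadamard circuit is an arbitrary example chosen so the answer is easy to check by hand (HH is the identity):

```python
import itertools
import math

# Sum-over-histories on a toy circuit: the amplitude <y|Gm...G1|x> is
# a sum over all sequences of intermediate basis states (histories),
# each history contributing the product of its one-step amplitudes.

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard, indexed [out][in]

def amplitude_sum_over_histories(gates, x, y):
    """Amplitude <y| gates[-1]...gates[0] |x>, basis states as ints."""
    dim = len(gates[0])
    m = len(gates)
    total = 0.0
    # A history fixes the basis state after each of the first m-1 gates.
    for hist in itertools.product(range(dim), repeat=m - 1):
        states = (x,) + hist + (y,)
        amp = 1.0
        for g, (a, b) in zip(gates, zip(states, states[1:])):
            amp *= g[b][a]  # one-step amplitude <b|g|a>
        total += amp
    return total

# Two Hadamards compose to the identity: <0|HH|0> = 1, <1|HH|0> = 0.
print(round(amplitude_sum_over_histories([H, H], 0, 0), 6))  # 1.0
print(round(amplitude_sum_over_histories([H, H], 0, 1), 6))  # 0.0
```

Note how the interference appears explicitly: the two histories ending at |1⟩ contribute +1/2 and −1/2 and cancel.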
Notice that, compared with the simulation given for the proof that BQP ⊆ EXP, our algorithm here takes far less space but far more time instead. In fact it takes time exponential in nm to calculate a single amplitude, since there are 2^(n(m−1)) histories to sum over!
BQP and PP
A similar sum-over-histories argument can be used to show that BQP ⊆ PP.
P and BQP
We know P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit.
It is conjectured that BQP solves hard problems outside of P, specifically, problems in NP. The claim is indefinite because we don't know if P = NP, so we don't know if those problems are actually in P. Below is some evidence for the conjecture:
Integer factorization (see Shor's algorithm)
Discrete logarithm
Simulation of quantum systems (see universal quantum simulator)
Approximating the Jones polynomial at certain roots of unity
Harrow-Hassidim-Lloyd (HHL) algorithm
See also
Hidden subgroup problem
Polynomial hierarchy (PH)
Quantum complexity theory
QMA, the quantum equivalent to NP.
QIP, the quantum equivalent to IP.
References
External links
Complexity Zoo link to BQP
Probabilistic complexity classes
Quantum complexity theory
Quantum computing
|
https://en.wikipedia.org/wiki/Brainfuck
|
Brainfuck is an esoteric programming language created in 1993 by Urban Müller.
Notable for its extreme minimalism, the language consists of only eight simple commands, a data pointer and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck requires one to break commands into microscopic steps.
The language's name is a reference to the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding, as it was not meant or made for designing actual software but to challenge the boundaries of computer programming.
History
Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in machine language and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language, and challenged the reader "Who can program anything useful with it? :)". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes.
P′′
Except for its two I/O commands, Brainfuck is a minor variation of the formal programming language P′′ created by Corrado Böhm in 1964, which is explicitly based on the Turing machine. In fact, using six symbols equivalent to the respective Brainfuck commands +, -, <, >, [, ], Böhm provided an explicit program for each of the basic functions that together serve to compute any computable function. So the first "Brainfuck" programs appear in Böhm's 1964 paper – and they were sufficient to prove Turing completeness.
Language design
The language consists of eight commands. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command.
The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding).
The eight language commands each consist of a single character:

>   Increment the data pointer by one (to point to the next cell to the right).
<   Decrement the data pointer by one (to point to the next cell to the left).
+   Increment the byte at the data pointer by one.
-   Decrement the byte at the data pointer by one.
.   Output the byte at the data pointer.
,   Accept one byte of input, storing its value in the byte at the data pointer.
[   If the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command.
]   If the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command.
[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two.
As the name suggests, Brainfuck programs tend to be difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model, if given access to an unlimited amount of memory. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C due to its simplicity. There even exist Brainfuck interpreters written in the Brainfuck language itself.
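To illustrate how trivial such an interpreter is, here is a minimal sketch in Python. The tape length, byte wrap-around, and the store-0-on-EOF convention are implementation choices, since the language leaves them open.

```python
def brainfuck(code, input_bytes=b""):
    """Run a Brainfuck program; return (output bytes, final tape)."""
    # Precompute matching-bracket positions for [ and ].
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape = [0] * 30000          # at least 30,000 byte cells, zero-initialized
    dp = ip = 0                 # data pointer and instruction pointer
    inp = iter(input_bytes)
    out = bytearray()
    while ip < len(code):
        c = code[ip]
        if c == ">":
            dp += 1
        elif c == "<":
            dp -= 1
        elif c == "+":
            tape[dp] = (tape[dp] + 1) % 256   # byte cells wrap around
        elif c == "-":
            tape[dp] = (tape[dp] - 1) % 256
        elif c == ".":
            out.append(tape[dp])
        elif c == ",":
            tape[dp] = next(inp, 0)           # EOF convention: store 0
        elif c == "[" and tape[dp] == 0:
            ip = jumps[ip]                    # jump past the matching ]
        elif c == "]" and tape[dp] != 0:
            ip = jumps[ip]                    # jump back to the matching [
        ip += 1                               # all other characters ignored
    return bytes(out), tape
```

Running the addition example from the next section, `++>+++++[<+>-]`, leaves 7 in the first cell and 0 in the second.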
Brainfuck is an example of a so-called Turing tarpit: It can be used to write any program, but it is not practical to do so, because Brainfuck provides so little abstraction that the programs get very long or complicated.
Examples
Adding two values
As a first, simple example, the following code snippet will add the current cell's value to the next cell: Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.
[->+<]
This can be incorporated into a simple addition program as follows:
++ Cell c0 = 2
> +++++ Cell c1 = 5
[ Start your loops with your cell pointer on the loop counter (c1 in our case)
< + Add 1 to c0
> - Subtract 1 from c1
] End your loops with the cell pointer on the loop counter
At this point our program has added 5 to 2 leaving 7 in c0 and 0 in c1
but we cannot output this value to the terminal since it is not ASCII encoded
To display the ASCII character "7" we must add 48 to the value 7
We use a loop to compute 48 = 6 * 8
++++ ++++ c1 = 8 and this will be our loop counter again
[
< +++ +++ Add 6 to c0
> - Subtract 1 from c1
]
< . Print out c0 which has the value 55 which translates to "7"!
Hello World!
The following program prints "Hello World!" and a newline to the screen:
[ This program prints "Hello World!" and a newline to the screen, its
length is 106 active command characters. [It is not the shortest.]
This loop is an "initial comment loop", a simple way of adding a comment
to a BF program such that you don't have to worry about any command
characters. Any ".", ",", "+", "-", "<" and ">" characters are simply
ignored, the "[" and "]" characters just have to be balanced. This
loop and the commands it contains are ignored because the current cell
defaults to a value of 0; the 0 value causes this loop to be skipped.
]
++++++++ Set Cell #0 to 8
[
>++++ Add 4 to Cell #1; this will always set Cell #1 to 4
[ as the cell will be cleared by the loop
>++ Add 2 to Cell #2
>+++ Add 3 to Cell #3
>+++ Add 3 to Cell #4
>+ Add 1 to Cell #5
<<<<- Decrement the loop counter in Cell #1
] Loop until Cell #1 is zero; number of iterations is 4
>+ Add 1 to Cell #2
>+ Add 1 to Cell #3
>- Subtract 1 from Cell #4
>>+ Add 1 to Cell #6
[<] Move back to the first zero cell you find; this will
be Cell #1 which was cleared by the previous loop
<- Decrement the loop Counter in Cell #0
] Loop until Cell #0 is zero; number of iterations is 8
The result of this is:
Cell no : 0 1 2 3 4 5 6
Contents: 0 0 72 104 88 32 8
Pointer : ^
>>. Cell #2 has value 72 which is 'H'
>---. Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++. Likewise for 'llo' from Cell #3
>>. Cell #5 is 32 for the space
<-. Subtract 1 from Cell #4 for 87 to give a 'W'
<. Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------. Cell #3 for 'rl' and 'd'
>>+. Add 1 to Cell #5 gives us an exclamation point
>++. And finally a newline from Cell #6
For "readability", this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
Another example of a code golfed version that prints Hello, World!:
+[-->-[>>+>-----<<]<--<---]>-.>>>+.>>..+++[.>]<<<<.+++.------.<<-.>>>>+.
ROT13
This program enciphers its input with the ROT13 cipher. To do this, it must map characters A–M (ASCII 65–77) to N–Z (78–90), and vice versa. It must likewise map a–m (97–109) to n–z (110–122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates.
-,+[ Read first character and start outer character reading loop
-[ Skip forward if character is 0
>>++++[>++++++++<-] Set up divisor (32) for division loop
(MEMORY LAYOUT: dividend copy remainder divisor quotient zero zero)
<+<-[ Set up dividend (x minus 1) and enter division loop
>+>+>-[>>>] Increase copy and remainder / reduce divisor / Normal case: skip forward
<[[>+<-]>>+>] Special case: move remainder back to divisor and increase quotient
<<<<<- Decrement dividend
] End division loop
]>>>[-]+ End skip loop; zero former divisor and reuse space for a flag
>--[-[<->+++[-]]]<[ Zero that flag unless quotient was 2 or 3; zero quotient; check flag
++++++++++++<[ If flag then set up divisor (13) for second division loop
(MEMORY LAYOUT: zero copy dividend divisor remainder quotient zero zero)
>-[>+>>] Reduce divisor; Normal case: increase remainder
>[+[<+>-]>+>>] Special case: increase remainder / move it back to divisor / increase quotient
<<<<<- Decrease dividend
] End division loop
>>[<+>-] Add remainder back to divisor to get a useful 13
>[ Skip forward if quotient was 0
-[ Decrement quotient and skip forward if quotient was 1
-<<[-]>> Zero quotient and divisor if quotient was 2
]<<[<<->>-]>> Zero divisor and subtract 13 from copy if quotient was 1
]<<[<<+>>-] Zero divisor and add 13 to copy if quotient was 0
] End outer skip loop (jump to here if ((character minus 1)/32) was not 2 or 3)
<[-] Clear remainder from first division if second division was skipped
<.[-] Output ROT13ed character from copy and clear it
<-,+ Read next character
] End character reading loop
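For comparison, the character mapping this program implements takes only a few lines in a conventional language. The following Python sketch (an illustrative re-implementation, not a translation of the Brainfuck above) performs the same A–M ↔ N–Z and a–m ↔ n–z swaps and leaves all other characters unchanged:

```python
def rot13(text: str) -> str:
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            # Rotate within the uppercase range 65-90
            out.append(chr((ord(ch) - 65 + 13) % 26 + 65))
        elif "a" <= ch <= "z":
            # Rotate within the lowercase range 97-122
            out.append(chr((ord(ch) - 97 + 13) % 26 + 97))
        else:
            out.append(ch)  # all other characters map to themselves
    return "".join(out)
```

Because the rotation is by exactly half the alphabet, applying the function twice returns the original text, which is what makes ROT13 its own inverse.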
See also
JSFuck – an esoteric JavaScript programming language with a very limited set of characters
Notes
References
Non-English-based programming languages
Esoteric programming languages
Programming languages created in 1993
|
https://en.wikipedia.org/wiki/Bioleaching
|
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt.
Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly, the minerals targeted for oxidation are pyrite and arsenopyrite.
The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper.
Process
Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal.
Pyrite leaching (FeS2):
In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+):
(1) FeS2 + 6 Fe3+ + 3 H2O → 7 Fe2+ + S2O3^2− + 6 H+ (spontaneous)
The ferrous ion is then oxidized by bacteria using oxygen:
(2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers)
Thiosulfate is also oxidized by bacteria to give sulfate:
(3) S2O3^2− + 2 O2 + H2O → 2 SO4^2− + 2 H+ (sulfur oxidizers)
The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction:
(4) FeS2 + 7/2 O2 + H2O → Fe2+ + 2 SO4^2− + 2 H+
The net products of the reaction are soluble ferrous sulfate and sulfuric acid.
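As a rough quantitative illustration (the calculation itself is not from the source; it uses standard atomic masses), the net pyrite reaction FeS2 + 7/2 O2 + H2O → Fe2+ + 2 SO4^2− + 2 H+ fixes the stoichiometric oxygen demand of the process:

```python
# Stoichiometric O2 demand of pyrite bioleaching, from the net reaction
# FeS2 + 7/2 O2 + H2O -> Fe2+ + 2 SO4^2- + 2 H+
M_FE, M_S, M_O = 55.845, 32.06, 15.999   # standard atomic masses, g/mol

m_fes2 = M_FE + 2 * M_S                  # molar mass of pyrite, ~120 g/mol
m_o2 = 2 * M_O                           # molar mass of O2, ~32 g/mol

# 3.5 mol of O2 are consumed per mol of FeS2, so per tonne of pyrite:
o2_per_tonne = 3.5 * m_o2 / m_fes2       # tonnes of O2 per tonne of FeS2

print(f"O2 demand: {o2_per_tonne:.3f} t per tonne of pyrite")  # ~0.93 t
```

Roughly a tonne of oxygen per tonne of pyrite, which is one reason aeration is a major design consideration in heap and tank bioleaching.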
The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant.
The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution.
Chalcopyrite leaching:
(1) CuFeS2 + 4 Fe3+ → Cu2+ + 5 Fe2+ + 2 S0 (spontaneous)
(2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers)
(3) 2 S0 + 3 O2 + 2 H2O → 2 SO4^2− + 4 H+ (sulfur oxidizers)
net reaction:
(4) CuFeS2 + 4 O2 → Cu2+ + Fe2+ + 2 SO4^2−
In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ → UO2^2+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals.
Further processing
The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene:
Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq)
The ligand donates electrons to the copper, producing a complex - a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, it is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution.
Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there.
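The electrowinning step obeys Faraday's law: each Cu2+ ion takes up two electrons at the cathode. A short sketch can estimate the copper deposited by a given current (the 200 A / 24 h figures below are hypothetical, chosen only for illustration):

```python
# Copper deposited at the cathode during electrowinning, via Faraday's law.
# Cu2+ + 2 e- -> Cu, so 2 mol of electrons deposit 1 mol of copper.
FARADAY = 96485.0      # C per mol of electrons
M_CU = 63.546          # g/mol, molar mass of copper

def copper_deposited_g(current_a: float, hours: float) -> float:
    charge = current_a * hours * 3600.0    # total charge in coulombs
    mol_electrons = charge / FARADAY
    return (mol_electrons / 2.0) * M_CU    # two electrons per Cu atom

# Hypothetical cell: 200 A running for 24 hours
mass = copper_deposited_g(200.0, 24.0)
print(f"{mass / 1000:.2f} kg of copper")   # about 5.7 kg
```

The factor of two in the denominator is the charge on the copper ion; a singly charged metal ion would deposit twice the molar amount for the same charge.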
The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron:
Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq)
The electrons lost by the iron are taken up by the copper. Copper is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons).
Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing (taking it up on the surface) to charcoal.
With fungi
Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains (Aspergillus niger, Penicillium simplicissimum) were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. Aspergillus niger can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as source of acids that directly dissolve the metal.
Feasibility
Economic feasibility
Bioleaching is in general simpler and, therefore, cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants. Low metal concentrations are not a problem for the bacteria, because they simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements. The company simply collects the ions out of the solution after the bacteria have finished.
Bioleaching can be used to extract metals from low-concentration ores, such as gold ores, that are too poor for other technologies. It can be used to partially replace the extensive crushing and grinding that translates to prohibitive cost and energy consumption in a conventional process, because the lower cost of bacterial leaching outweighs the time it takes to extract the metal.
High-concentration ores, such as copper ores, are more economical to smelt than to bioleach, due to the slow speed of the bacterial leaching process compared to smelting. The slow speed of bioleaching also introduces a significant delay in cash flow for new mines. Nonetheless, at the world's largest copper mine, Escondida in Chile, the process seems to be favorable.
Bioleaching operations can also be very expensive, and many companies, once started, cannot keep up with demand and end up in debt.
In space
In 2020 scientists showed, with an experiment with different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space.
Environmental impact
The process is more environmentally friendly than traditional extraction methods. For the company this can translate into profit, since the necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. As the bacteria breed in the conditions of the mine, they are easily cultivated and recycled.
Toxic chemicals are sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water, turning it acidic and causing environmental damage. Heavy-metal ions such as iron, zinc, and arsenic leak out during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "yellow boy" pollution. For these reasons, a bioleaching setup must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, once started, bioheap leaching cannot be quickly stopped, because leaching would still continue with rainwater and natural bacteria. Projects like the Finnish Talvivaara mine proved to be environmentally and economically disastrous.
See also
Phytomining
References
Further reading
T. A. Fowler and F. K. Crundwell – "Leaching of zinc sulfide with Thiobacillus ferrooxidans"
Brandl H. (2001) "Microbial leaching of metals". In: Rehm H. J. (ed.) Biotechnology, Vol. 10. Wiley-VCH, Weinheim, pp. 191–224
Biotechnology
Economic geology
Metallurgical processes
Applied microbiology
|
https://en.wikipedia.org/wiki/Bacteriophage
|
A bacteriophage, also known informally as a phage, is a virus that infects and replicates within bacteria and archaea. The term was derived from "bacteria" and the Greek φαγεῖν (phagein), meaning "to devour". Bacteriophages are composed of proteins that encapsulate a DNA or RNA genome, and may have structures that are either simple or elaborate. Their genomes may encode as few as four genes (e.g. MS2) or as many as hundreds of genes. Phages replicate within the bacterium following the injection of their genome into its cytoplasm.
Bacteriophages are among the most common and diverse entities in the biosphere. Bacteriophages are ubiquitous viruses, found wherever bacteria exist. It is estimated there are more than 10³¹ bacteriophages on the planet, more than every other organism on Earth, including bacteria, combined. Viruses are the most abundant biological entity in the water column of the world's oceans, and the second-largest component of biomass after prokaryotes; up to 9×10⁸ virions per millilitre have been found in microbial mats at the surface, and up to 70% of marine bacteria may be infected by phages.
Phages have been used since the late 20th century as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy).
Phages are known to interact with the immune system both indirectly via bacterial expression of phage-encoded proteins and directly by influencing innate immunity and bacterial clearance. Phage–host interactions are becoming increasingly important areas of research.
Classification
Bacteriophages occur abundantly in the biosphere, with different genomes and lifestyles. Phages are classified by the International Committee on Taxonomy of Viruses (ICTV) according to morphology and nucleic acid.
It has been suggested that members of Picobirnaviridae infect bacteria, but not mammals.
There are also many unassigned genera of the class Leviviricetes: Chimpavirus, Hohglivirus, Mahrahvirus, Meihzavirus, Nicedsevirus, Sculuvirus, Skrubnovirus, Tetipavirus and Winunavirus containing linear ssRNA genomes and the unassigned genus Lilyvirus of the order Caudovirales containing a linear dsDNA genome.
History
In 1896, Ernest Hanbury Hankin reported that something in the waters of the Ganges and Yamuna rivers in India had a marked antibacterial action against cholera and it could pass through a very fine porcelain filter. In 1915, British bacteriologist Frederick Twort, superintendent of the Brown Institution of London, discovered a small agent that infected and killed bacteria. He believed the agent must be one of the following:
a stage in the life cycle of the bacteria
an enzyme produced by the bacteria themselves, or
a virus that grew on and destroyed the bacteria
Twort's research was interrupted by the onset of World War I, as well as a shortage of funding and the discoveries of antibiotics.
Independently, French-Canadian microbiologist Félix d'Hérelle, working at the Pasteur Institute in Paris, announced on 3 September 1917 that he had discovered "an invisible, antagonistic microbe of the dysentery bacillus". For d'Hérelle, there was no question as to the nature of his discovery: "In a flash I had understood: what caused my clear spots was in fact an invisible microbe... a virus parasitic on bacteria." D'Hérelle called the virus a bacteriophage, a bacteria-eater (from the Greek φαγεῖν, meaning "to devour"). He also recorded a dramatic account of a man suffering from dysentery who was restored to good health by the bacteriophages. It was d'Hérelle who conducted much research into bacteriophages and introduced the concept of phage therapy. In 1919, in Paris, France, d'Hérelle conducted the first clinical application of a bacteriophage, with the first reported use in the United States being in 1922.
Nobel prizes awarded for phage research
In 1969, Max Delbrück, Alfred Hershey, and Salvador Luria were awarded the Nobel Prize in Physiology or Medicine for their discoveries of the replication of viruses and their genetic structure. Specifically the work of Hershey, as contributor to the Hershey–Chase experiment in 1952, provided convincing evidence that DNA, not protein, was the genetic material of life. Delbrück and Luria carried out the Luria–Delbrück experiment which demonstrated statistically that mutations in bacteria occur randomly and thus follow Darwinian rather than Lamarckian principles.
Uses
Phage therapy
Phages were discovered to be antibacterial agents and were used in the former Soviet Republic of Georgia (pioneered there by Giorgi Eliava with help from the co-discoverer of bacteriophages, Félix d'Hérelle) during the 1920s and 1930s for treating bacterial infections. They had widespread use, including treatment of soldiers in the Red Army. However, they were abandoned for general use in the West for several reasons:
Antibiotics were discovered and marketed widely. They were easier to make, store, and prescribe.
Medical trials of phages were carried out, but a basic lack of understanding of phages raised questions about the validity of these trials.
Publication of research in the Soviet Union was mainly in the Russian or Georgian languages and for many years was not followed internationally.
The use of phages has continued since the end of the Cold War in Russia, Georgia, and elsewhere in Central and Eastern Europe. The first regulated, randomized, double-blind clinical trial was reported in the Journal of Wound Care in June 2009, which evaluated the safety and efficacy of a bacteriophage cocktail to treat infected venous ulcers of the leg in human patients. The FDA approved the study as a Phase I clinical trial. The study's results demonstrated the safety of therapeutic application of bacteriophages, but did not show efficacy. The authors explained that the use of certain chemicals that are part of standard wound care (e.g. lactoferrin or silver) may have interfered with bacteriophage viability. Shortly after that, another controlled clinical trial in Western Europe (treatment of ear infections caused by Pseudomonas aeruginosa) was reported in the journal Clinical Otolaryngology in August 2009. The study concludes that bacteriophage preparations were safe and effective for treatment of chronic ear infections in humans. Additionally, there have been numerous animal and other experimental clinical trials evaluating the efficacy of bacteriophages for various diseases, such as infected burns and wounds, and cystic fibrosis-associated lung infections, among others. On the other hand, phages of Inoviridae have been shown to complicate biofilms involved in pneumonia and cystic fibrosis and to shelter the bacteria from drugs meant to eradicate disease, thus promoting persistent infection.
Meanwhile, bacteriophage researchers have been developing engineered viruses to overcome antibiotic resistance, and engineering the phage genes responsible for coding enzymes that degrade the biofilm matrix, phage structural proteins, and the enzymes responsible for lysis of the bacterial cell wall. Results have shown that small, short-tailed T4 phages can be helpful in detecting E. coli in the human body.
Therapeutic efficacy of a phage cocktail was evaluated in a mouse model with nasal infection of multidrug-resistant (MDR) A. baumannii. Mice treated with the phage cocktail showed a 2.3-fold higher survival rate than untreated mice at seven days post-infection. In 2017, a patient with a pancreas compromised by MDR A. baumannii was put on several antibiotics; despite this, the patient's health continued to deteriorate over a four-month period. Without effective antibiotics, the patient was subjected to phage therapy using a cocktail of nine different phages that had been demonstrated to be effective against MDR A. baumannii. Once on this therapy the patient's downward clinical trajectory reversed, and he returned to health.
D'Herelle "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients." This includes rivers traditionally thought to have healing powers, including India's Ganges River.
Other
Food industry – Phages have increasingly been used to improve the safety of food products and to forestall spoilage bacteria. Since 2006, the United States Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) have approved several bacteriophage products. LMP-102 (Intralytix) was approved for treating ready-to-eat (RTE) poultry and meat products. In that same year, the FDA approved LISTEX (developed and produced by Micreos), which uses bacteriophages on cheese to kill Listeria monocytogenes bacteria, giving them generally recognized as safe (GRAS) status. In July 2007, the same bacteriophages were approved for use on all food products. In 2011, the USDA confirmed that LISTEX is a clean-label processing aid. Research in the field of food safety is continuing to see if lytic phages are a viable option to control other food-borne pathogens in various food products.
Diagnostics – In 2011, the FDA cleared the first bacteriophage-based product for in vitro diagnostic use. The KeyPath MRSA/MSSA Blood Culture Test uses a cocktail of bacteriophage to detect Staphylococcus aureus in positive blood cultures and determine methicillin resistance or susceptibility. The test returns results in about five hours, compared to two to three days for standard microbial identification and susceptibility test methods. It was the first accelerated antibiotic-susceptibility test approved by the FDA.
Counteracting bioweapons and toxins – Government agencies in the West have for several years been looking to Georgia and the former Soviet Union for help with exploiting phages for counteracting bioweapons and toxins, such as anthrax and botulism. Developments are continuing among research groups in the U.S. Other uses include spray application in horticulture for protecting plants and vegetable produce from decay and the spread of bacterial disease. Other applications for bacteriophages are as biocides for environmental surfaces, e.g., in hospitals, and as preventative treatments for catheters and medical devices before use in clinical settings. The technology for phages to be applied to dry surfaces, e.g., uniforms, curtains, or even sutures for surgery now exists. Clinical trials reported in Clinical Otolaryngology show success in veterinary treatment of pet dogs with otitis.
The SEPTIC bacterium sensing and identification method uses the ion emission and its dynamics during phage infection and offers high specificity and speed for detection.
Phage display is a different use of phages involving a library of phages with a variable peptide linked to a surface protein. Each phage genome encodes the variant of the protein displayed on its surface (hence the name), providing a link between the peptide variant and its encoding gene. Variant phages from the library may be selected through their binding affinity to an immobilized molecule (e.g., botulism toxin) to neutralize it. The bound, selected phages can be multiplied by reinfecting a susceptible bacterial strain, thus allowing them to retrieve the peptides encoded in them for further study.
Antimicrobial drug discovery – Phage proteins often have antimicrobial activity and may serve as leads for peptidomimetics, i.e. drugs that mimic peptides. Phage-ligand technology makes use of phage proteins for various applications, such as binding of bacteria and bacterial components (e.g. endotoxin) and lysis of bacteria.
Basic research – Bacteriophages are important model organisms for studying principles of evolution and ecology.
Detriments
Dairy industry
Bacteriophages present in the environment can cause cheese fermentation to fail. To avoid this, mixed-strain starter cultures and culture-rotation regimes can be used. Genetic engineering of culture microbes – especially Lactococcus lactis and Streptococcus thermophilus – has been studied for genetic analysis and modification to improve phage resistance, with particular focus on plasmid and recombinant chromosomal modifications.
Some research has focused on the potential of bacteriophages as antimicrobials against foodborne pathogens and biofilm formation within the dairy industry. As the spread of antibiotic resistance is a major concern within the dairy industry, phages may serve as a promising alternative.
Replication
The life cycle of bacteriophages tends to be either a lytic cycle or a lysogenic cycle. In addition, some phages display pseudolysogenic behaviors.
With lytic phages such as the T4 phage, bacterial cells are broken open (lysed) and destroyed after immediate replication of the virion. As soon as the cell is destroyed, the phage progeny can find new hosts to infect. Lytic phages are more suitable for phage therapy. Some lytic phages undergo a phenomenon known as lysis inhibition, where completed phage progeny will not immediately lyse out of the cell if extracellular phage concentrations are high. This mechanism is not identical to that of the temperate phage going dormant and usually is temporary.
In contrast, the lysogenic cycle does not result in immediate lysing of the host cell. Those phages able to undergo lysogeny are known as temperate phages. Their viral genome will integrate with host DNA and replicate along with it, relatively harmlessly, or may even become established as a plasmid. The virus remains dormant until host conditions deteriorate, perhaps due to depletion of nutrients; then the endogenous phages (known as prophages) become active. At this point they initiate the reproductive cycle, resulting in lysis of the host cell. As the lysogenic cycle allows the host cell to continue to survive and reproduce, the virus is replicated in all offspring of the cell. An example of a bacteriophage known to follow the lysogenic cycle and the lytic cycle is the phage lambda of E. coli.
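The exponential amplification a lytic cycle implies can be illustrated with simple arithmetic. The sketch below is an idealized calculation: the burst size and cycle count are hypothetical illustration values, not figures from this article.

```python
# Idealized sketch of phage amplification over successive lytic cycles.
# Burst size (virions released per lysed cell) and cycle count are
# hypothetical illustration values; real burst sizes vary widely by phage.

def phage_population(initial_phages: int, burst_size: int, cycles: int) -> int:
    """Population after `cycles` rounds of lytic replication, assuming
    every virion finds and lyses a fresh host cell each round."""
    return initial_phages * burst_size ** cycles

# A single phage with an assumed burst size of 100, after 3 cycles:
print(phage_population(1, 100, 3))  # 1000000
```

In practice, host depletion, lysis inhibition, and adsorption kinetics all curb this ideal geometric growth.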
Sometimes prophages may provide benefits to the host bacterium while they are dormant by adding new functions to the bacterial genome, in a phenomenon called lysogenic conversion. Examples are the conversion of harmless strains of Corynebacterium diphtheriae or Vibrio cholerae by bacteriophages to highly virulent ones that cause diphtheria or cholera, respectively. Strategies to combat certain bacterial infections by targeting these toxin-encoding prophages have been proposed.
Attachment and penetration
Bacterial cells are protected by a cell wall of polysaccharides, which are important virulence factors protecting bacterial cells against both immune host defenses and antibiotics.
Host growth conditions also influence the ability of the phage to attach to and invade the host cell. As phage virions do not move independently, they must rely on random encounters with the correct receptors while in solution, such as in blood, lymphatic circulation, irrigation, soil water, etc.
Myovirus bacteriophages use a hypodermic syringe-like motion to inject their genetic material into the cell. After contacting the appropriate receptor, the tail fibers flex to bring the base plate closer to the surface of the cell. This is known as reversible binding. Once attached completely, irreversible binding is initiated and the tail contracts, possibly with the help of ATP present in the tail, injecting genetic material through the bacterial membrane. The injection is accomplished through a bending motion of the shaft, which swings to the side, contracts closer to the cell, and pushes back up. Podoviruses lack an elongated tail sheath like that of a myovirus, so instead they use their small, tooth-like tail fibers to enzymatically degrade a portion of the cell membrane before inserting their genetic material.
Synthesis of proteins and nucleic acid
Within minutes, bacterial ribosomes start translating viral mRNA into protein. For RNA-based phages, RNA replicase is synthesized early in the process. Proteins modify the bacterial RNA polymerase so it preferentially transcribes viral mRNA. The host's normal synthesis of proteins and nucleic acids is disrupted, and it is forced to manufacture viral products instead. These products go on to become part of new virions within the cell, helper proteins that contribute to the assembly of new virions, or proteins involved in cell lysis. In 1972, Walter Fiers (University of Ghent, Belgium) was the first to establish the complete nucleotide sequence of a gene, and in 1976, of the viral genome of bacteriophage MS2. Some dsDNA bacteriophages encode ribosomal proteins, which are thought to modulate protein translation during phage infection.
Virion assembly
In the case of the T4 phage, the construction of new virus particles involves the assistance of helper proteins that act catalytically during phage morphogenesis. The base plates are assembled first, with the tails being built upon them afterward. The head capsids, constructed separately, will spontaneously assemble with the tails. During assembly of the phage T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. The DNA is packed efficiently within the heads. The whole process takes about 15 minutes.
Release of virions
Phages may be released via cell lysis, by extrusion, or, in a few cases, by budding. Lysis, by tailed phages, is achieved by an enzyme called endolysin, which attacks and breaks down the cell wall peptidoglycan. An altogether different phage type, the filamentous phage, makes the host cell continually secrete new virus particles. Released virions are described as free, and, unless defective, are capable of infecting a new bacterium. Budding is associated with certain Mycoplasma phages. In contrast to virion release, phages displaying a lysogenic cycle do not kill the host and instead become long-term residents as prophages.
Communication
Research in 2017 revealed that the bacteriophage Φ3T makes a short viral protein that signals other bacteriophages to lie dormant instead of killing the host bacterium. Arbitrium is the name given to this protein by the researchers who discovered it.
Genome structure
Given the millions of different phages in the environment, phage genomes come in a variety of forms and sizes. RNA phages such as MS2 have the smallest genomes, of only a few kilobases. However, some DNA phages such as T4 may have large genomes with hundreds of genes; the size and shape of the capsid vary along with the size of the genome. The largest bacteriophage genomes reach a size of 735 kb.
Bacteriophage genomes can be highly mosaic, i.e. the genomes of many phage species appear to be composed of numerous individual modules. These modules may be found in other phage species in different arrangements. Mycobacteriophages, bacteriophages with mycobacterial hosts, have provided excellent examples of this mosaicism. In these mycobacteriophages, genetic assortment may be the result of repeated instances of site-specific recombination and illegitimate recombination (the result of phage genome acquisition of bacterial host genetic sequences). Evolutionary mechanisms shaping the genomes of bacterial viruses vary between different families and depend upon the type of nucleic acid, characteristics of the virion structure, as well as the mode of the viral life cycle.
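As a rough illustration of how genome size tracks gene count, the sketch below applies the commonly cited rule of thumb of about one gene per kilobase in phage genomes. The density constant and the MS2 size (~3.6 kb) are assumptions for illustration, and the resulting counts are estimates, not annotated gene totals.

```python
# Rough gene-count estimates from phage genome size, using an assumed
# average density of ~1 gene per kilobase (a common rule of thumb).
GENE_DENSITY_PER_KB = 1.0  # assumed; actual density varies by phage

genomes_kb = {
    "MS2 (RNA phage)": 3.6,  # assumed size for illustration
    "T4": 169,               # from the text
    "largest known": 735,    # from the text
}

for name, size_kb in genomes_kb.items():
    est_genes = round(size_kb * GENE_DENSITY_PER_KB)
    print(f"{name}: ~{size_kb} kb, ~{est_genes} genes (estimate)")
```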
Some marine roseobacter phages contain deoxyuridine (dU) instead of deoxythymidine (dT) in their genomic DNA. There is some evidence that this unusual component is a mechanism to evade bacterial defense mechanisms such as restriction endonucleases and CRISPR/Cas systems which evolved to recognize and cleave sequences within invading phages, thereby inactivating them. Other phages have long been known to use unusual nucleotides. In 1963, Takahashi and Marmur identified a Bacillus phage that has dU substituting dT in its genome, and in 1977, Kirnos et al. identified a cyanophage containing 2-aminoadenine (Z) instead of adenine (A).
Systems biology
The field of systems biology investigates the complex networks of interactions within an organism, usually using computational tools and modeling. For example, a phage genome that enters into a bacterial host cell may express hundreds of phage proteins which will affect the expression of numerous host genes or the host's metabolism. All of these complex interactions can be described and simulated in computer models.
For instance, infection of Pseudomonas aeruginosa by the temperate phage PaP3 changed the expression of 38% (2160/5633) of its host's genes. Many of these effects are probably indirect, hence the challenge becomes to identify the direct interactions among bacteria and phage.
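The quoted percentage can be checked directly from the gene counts given above:

```python
# Verifying the figure quoted for phage PaP3 infection of P. aeruginosa:
# 2160 of the host's 5633 genes changed expression.
changed, total = 2160, 5633
fraction = changed / total
print(f"{fraction:.1%}")  # 38.3%, matching the ~38% quoted
```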
Several attempts have been made to map protein–protein interactions among phage and their host. For instance, bacteriophage lambda was found to interact with its host, E. coli, through dozens of interactions. Again, the significance of many of these interactions remains unclear, but these studies suggest that there most likely are several key interactions and many indirect interactions whose role remains uncharacterized.
Host resistance
Bacteriophages are a major threat to bacteria, and prokaryotes have evolved numerous mechanisms to block infection or to block the replication of bacteriophages within host cells. The CRISPR system is one such mechanism, as are retrons and the antitoxin systems they encode. The Thoeris defense system is known to deploy a unique strategy for bacterial antiphage resistance via NAD+ degradation.
Bacteriophage–host symbiosis
Temperate phages are bacteriophages that integrate their genetic material into the host as extrachromosomal episomes or as a prophage during a lysogenic cycle. Some temperate phages can confer fitness advantages to their host in numerous ways, including giving antibiotic resistance through the transfer or introduction of antibiotic resistance genes (ARGs), protecting hosts from phagocytosis, protecting hosts from secondary infection through superinfection exclusion, enhancing host pathogenicity, or enhancing bacterial metabolism or growth. Bacteriophage–host symbiosis may benefit bacteria by providing selective advantages while passively replicating the phage genome.
In the environment
Metagenomics has allowed the detection of bacteriophages in water, which was not possible previously.
Also, bacteriophages have been used in hydrological tracing and modelling in river systems, especially where surface water and groundwater interactions occur. The use of phages is preferred to the more conventional dye marker because they are significantly less absorbed when passing through groundwater and they are readily detected at very low concentrations. Non-polluted water may contain approximately 2×10⁸ bacteriophages per ml.
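Concentrations of this kind are conventionally measured by plaque assay. The sketch below shows the standard titer arithmetic; the plate count, plated volume, and dilution factor are hypothetical illustration values.

```python
# Standard plaque-assay titer calculation (plaque-forming units per ml).
# Plate count, volume, and dilution below are hypothetical values.

def titer_pfu_per_ml(plaques: int, volume_ml: float, dilution: float) -> float:
    """Titer = plaques / (volume plated x dilution factor of the sample)."""
    return plaques / (volume_ml * dilution)

# 200 plaques from 0.1 ml of a 1e-5 dilution of a water sample:
print(f"{titer_pfu_per_ml(200, 0.1, 1e-5):.1e} PFU/ml")  # 2.0e+08 PFU/ml
```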
Bacteriophages are thought to contribute extensively to horizontal gene transfer in natural environments, principally via transduction, but also via transformation. Metagenomics-based studies also have revealed that viromes from a variety of environments harbor antibiotic-resistance genes, including those that could confer multidrug resistance.
In humans
Although phages do not infect humans, there are countless phage particles in the human body, given our extensive microbiome. Our phage population has been called the human phageome, including the "healthy gut phageome" (HGP) and the "diseased human phageome" (DHP). The active phageome of a healthy human (i.e., actively replicating as opposed to nonreplicating, integrated prophage) has been estimated to comprise dozens to thousands of different viruses.
There is evidence that bacteriophages and bacteria interact in the human gut microbiome both antagonistically and beneficially.
Preliminary studies have indicated that common bacteriophages are found in 62% of healthy individuals on average, while their prevalence was reduced by 42% and 54% on average in patients with ulcerative colitis (UC) and Crohn's disease (CD). Abundance of phages may also decline in the elderly.
The most common phages in the human intestine, found worldwide, are crAssphages. CrAssphages are transmitted from mother to child soon after birth, and there is some evidence suggesting that they may be transmitted locally. Each person develops their own unique crAssphage clusters. CrAss-like phages also may be present in primates besides humans.
Commonly studied bacteriophage
Among the countless phages, only a few have been studied in detail, including some historically important phages that were discovered in the early days of microbial genetics. These, especially the T-phages, helped to uncover important principles of gene structure and function.
186 phage
λ phage
Φ6 phage
Φ29 phage
ΦX174
Bacteriophage φCb5
G4 phage
M13 phage
MS2 phage (23–28 nm in size)
N4 phage
P1 phage
P2 phage
P4 phage
R17 phage
T2 phage
T4 phage (169 kbp genome, 200 nm long)
T7 phage
T12 phage
See also
Bacterivore
CrAssphage
CRISPR
DNA viruses
Macrophage
Phage ecology
Phage monographs (a comprehensive listing of phage and phage-associated monographs, 1921–present)
Phagemid
Polyphage
RNA viruses
Transduction
Viriome
Virophage, viruses that infect other viruses
References
Bibliography
External links
Biology
|
https://en.wikipedia.org/wiki/Bactericide
|
A bactericide or bacteriocide, sometimes abbreviated Bcidal, is a substance which kills bacteria. Bactericides are disinfectants, antiseptics, or antibiotics.
However, material surfaces can also have bactericidal properties based solely on their physical surface structure, as for example biomaterials like insect wings.
Disinfectants
The most commonly used disinfectants are those applying
active chlorine (i.e., hypochlorites, chloramines, dichloroisocyanurate and trichloroisocyanurate, wet chlorine, chlorine dioxide, etc.),
active oxygen (peroxides, such as peracetic acid, potassium persulfate, sodium perborate, sodium percarbonate, and urea perhydrate),
iodine (povidone-iodine, Lugol's solution, iodine tincture, iodinated nonionic surfactants),
concentrated alcohols (mainly ethanol, 1-propanol, also called n-propanol, and 2-propanol, called isopropanol, and mixtures thereof; further, 2-phenoxyethanol and 1- and 2-phenoxypropanols are used),
phenolic substances (such as phenol (also called "carbolic acid"), cresols such as thymol, halogenated (chlorinated, brominated) phenols, such as hexachlorophene, triclosan, trichlorophenol, tribromophenol, pentachlorophenol, salts and isomers thereof),
cationic surfactants, such as some quaternary ammonium cations (benzalkonium chloride, cetyl trimethylammonium bromide or chloride, didecyldimethylammonium chloride, cetylpyridinium chloride, benzethonium chloride) and non-quaternary compounds such as chlorhexidine, glucoprotamine and octenidine dihydrochloride,
strong oxidizers, such as ozone and permanganate solutions;
heavy metals and their salts, such as colloidal silver, silver nitrate, mercury chloride, phenylmercury salts, copper sulfate, copper oxide-chloride, etc. Heavy metals and their salts are the most toxic and environmentally hazardous bactericides, and therefore their use is strongly discouraged or prohibited;
strong acids (phosphoric, nitric, sulfuric, amidosulfuric, toluenesulfonic acids), pH < 1, and
alkalis (sodium, potassium and calcium hydroxides) at pH > 13, particularly at elevated temperatures (above 60 °C).
Antiseptics
As antiseptics (i.e., germicidal agents that can be used on the human or animal body, skin, mucous membranes, wounds and the like), only a few of the above-mentioned disinfectants can be used, under proper conditions (mainly concentration, pH, temperature and toxicity toward humans and animals). Among the important ones are
properly diluted chlorine preparations (e.g., Dakin's solution, 0.5% sodium or potassium hypochlorite solution, pH-adjusted to pH 7 – 8, or 0.5 – 1% solution of sodium benzenesulfochloramide (chloramine B)), some
iodine preparations, such as iodopovidone in various galenics (ointment, solutions, wound plasters), in the past also Lugol's solution,
peroxides such as urea perhydrate solutions and pH-buffered 0.1 – 0.25% peracetic acid solutions,
alcohols with or without antiseptic additives, used mainly for skin antisepsis,
weak organic acids such as sorbic acid, benzoic acid, lactic acid and salicylic acid
some phenolic compounds, such as hexachlorophene, triclosan and Dibromol, and
cationic surfactants, such as 0.05 – 0.5% benzalkonium, 0.5 – 4% chlorhexidine, 0.1 – 2% octenidine solutions.
Others are generally not applicable as safe antiseptics, either because of their corrosive or toxic nature.
Antibiotics
Bactericidal antibiotics kill bacteria; bacteriostatic antibiotics slow their growth or reproduction.
Bactericidal antibiotics that inhibit cell wall synthesis: the beta-lactam antibiotics (penicillin derivatives (penams), cephalosporins (cephems), monobactams, and carbapenems) and vancomycin.
Also bactericidal are daptomycin, fluoroquinolones, metronidazole, nitrofurantoin, co-trimoxazole, and telithromycin.
Aminoglycosidic antibiotics are usually considered bactericidal, although they may be bacteriostatic with some organisms.
As of 2004, the distinction between bactericidal and bacteriostatic agents appeared to be clear according to the basic/clinical definition, but this applies only under strict laboratory conditions, and it is important to distinguish microbiological and clinical definitions. The distinction is more arbitrary when agents are categorized in clinical situations. The supposed superiority of bactericidal agents over bacteriostatic agents is of little relevance when treating the vast majority of infections with gram-positive bacteria, particularly in patients with uncomplicated infections and noncompromised immune systems. Bacteriostatic agents have been used effectively for treatments that are considered to require bactericidal activity. Furthermore, some broad classes of antibacterial agents considered bacteriostatic can exhibit bactericidal activity against some bacteria on the basis of in vitro determination of MBC/MIC values. At high concentrations, bacteriostatic agents are often bactericidal against some susceptible organisms. The ultimate guide to treatment of any infection must be clinical outcome.
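The MBC/MIC determination mentioned above is often summarized with a conventional laboratory rule of thumb: an agent whose minimum bactericidal concentration (MBC) is within about fourfold of its minimum inhibitory concentration (MIC) is read as bactericidal for that organism. The cutoff and the example values below reflect that convention, not clinical guidance or figures from this article.

```python
# Conventional rule of thumb: bactericidal when MBC/MIC <= 4 for a given
# organism, bacteriostatic otherwise. Threshold and values are illustrative.

def classify(mbc: float, mic: float, cutoff: float = 4.0) -> str:
    """Classify an agent-organism pair from in vitro MBC and MIC values."""
    return "bactericidal" if mbc / mic <= cutoff else "bacteriostatic"

print(classify(mbc=2.0, mic=1.0))   # bactericidal
print(classify(mbc=32.0, mic=1.0))  # bacteriostatic
```

As the text stresses, such in vitro labels can flip with concentration and organism, so clinical outcome remains the deciding criterion.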
Surfaces
Material surfaces can exhibit bactericidal properties because of their crystallographic surface structure.
In the mid-2000s it was shown that metallic nanoparticles can kill bacteria. The effect of a silver nanoparticle, for example, depends on its size, with a preferential diameter of about 1–10 nm for interacting with bacteria.
In 2013, cicada wings were found to have a selective anti-gram-negative bactericidal effect based on their physical surface structure. Mechanical deformation of the more or less rigid nanopillars found on the wing releases energy that ruptures and kills adhering bacteria within minutes, hence the term mechano-bactericidal effect.
In 2020 researchers combined cationic polymer adsorption and femtosecond laser surface structuring to generate a bactericidal effect against both gram-positive Staphylococcus aureus and gram-negative Escherichia coli bacteria on borosilicate glass surfaces, providing a practical platform for the study of the bacteria-surface interaction.
See also
List of antibiotics
Microbicide
Virucide
References
|
https://en.wikipedia.org/wiki/Bohrium
|
Bohrium is a synthetic chemical element with the symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes.
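The practical consequence of a 2.4-minute half-life can be seen with the standard exponential-decay relation N(t) = N₀ · 2^(−t/T½); the sketch below only restates that textbook formula with the half-life from this article.

```python
# Fraction of a 270Bh sample remaining after t minutes, given the
# ~2.4-minute half-life quoted in the text (standard exponential decay).

def fraction_remaining(t_min: float, half_life_min: float = 2.4) -> float:
    return 0.5 ** (t_min / half_life_min)

# After 10 minutes, only about 5-6% of the atoms are left:
print(f"{fraction_remaining(10):.3f}")  # 0.056
```

This is why bohrium chemistry must be carried out atom by atom, immediately after production.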
In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements.
Introduction
History
Discovery
Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55 respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough.
In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce 5 atoms of the isotope bohrium-262:
209Bi + 54Cr → 262Bh + n
This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report.
Proposed names
In September 1992, the German group suggested the name nielsbohrium with symbol Ns to honor the Danish physicist Niels Bohr. The Soviet scientists at the Joint Institute for Nuclear Research in Dubna, Russia had suggested this name be given to element 105 (which was finally called dubnium) and the German team wished to recognise both Bohr and the fact that the Dubna team had been the first to propose the cold fusion reaction, and simultaneously help to solve the controversial problem of the naming of element 105. The Dubna team agreed with the German group's naming proposal for element 107.
There was an element naming controversy as to what the elements from 104 to 106 were to be called; the IUPAC adopted unnilseptium (symbol Uns) as a temporary, systematic element name for this element. In 1994 a committee of IUPAC recommended that element 107 be named bohrium, not nielsbohrium, since there was no precedent for using a scientist's complete name in the naming of an element. This was opposed by the discoverers as there was some concern that the name might be confused with boron and in particular the distinguishing of the names of their respective oxyanions, bohrate and borate. The matter was handed to the Danish branch of IUPAC which, despite this, voted in favour of the name bohrium, and thus the name bohrium for element 107 was recognized internationally in 1997; the names of the respective oxyanions of boron and bohrium remain unchanged despite their homophony.
Isotopes
Bohrium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of bohrium have been reported with atomic masses 260–262, 264–267, 270–272, 274, and 278, one of which, bohrium-262, has a known metastable state. All of these but the unconfirmed 278Bh decay only through alpha decay, although some unknown bohrium isotopes are predicted to undergo spontaneous fission.
The lighter isotopes usually have shorter half-lives; half-lives of under 100 ms for 260Bh, 261Bh, 262Bh, and 262mBh were observed. 264Bh, 265Bh, 266Bh, and 271Bh are more stable at around 1 s, and 267Bh and 272Bh have half-lives of about 10 s. The heaviest isotopes are the most stable, with 270Bh and 274Bh having measured half-lives of about 2.4 min and 40 s respectively, and the even heavier unconfirmed isotope 278Bh appearing to have an even longer half-life of about 11.5 minutes.
The most proton-rich isotopes with masses 260, 261, and 262 were directly produced by cold fusion, those with masses 262 and 264 were reported in the decay chains of meitnerium and roentgenium, while the neutron-rich isotopes with masses 265, 266, and 267 were created in irradiations of actinide targets. The five most neutron-rich ones with masses 270, 271, 272, 274, and 278 (unconfirmed) appear in the decay chains of 282Nh, 287Mc, 288Mc, 294Ts, and 290Fl respectively. The half-lives of bohrium isotopes range from about ten milliseconds for 262mBh to about one minute for 270Bh and 274Bh, extending to about 11.5 minutes for the unconfirmed 278Bh, one of the longest-lived known superheavy nuclides.
Predicted properties
Very few properties of bohrium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that bohrium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of bohrium metal remain unknown and only predictions are available.
Chemical
Bohrium is the fifth member of the 6d series of transition metals and the heaviest member of group 7 in the periodic table, below manganese, technetium and rhenium. All the members of the group readily portray their group oxidation state of +7 and the state becomes more stable as the group is descended. Thus bohrium is expected to form a stable +7 state. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, , analogous to the lighter permanganate, pertechnetate, and perrhenate. Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV).
Technetium and rhenium are known to form volatile heptoxides M2O7 (M = Tc, Re), so bohrium should also form the volatile oxide Bh2O7. The oxide should dissolve in water to form perbohric acid, HBhO4.
Rhenium and technetium form a range of oxyhalides from the halogenation of the oxide. The chlorination of the oxide forms the oxychlorides MO3Cl, so BhO3Cl should be formed in this reaction. Fluorination results in MO3F and MO2F3 for the heavier elements in addition to the rhenium compounds ReOF5 and ReF7. Therefore, oxyfluoride formation for bohrium may help to indicate eka-rhenium properties. Since the oxychlorides are asymmetrical, and they should have increasingly large dipole moments going down the group, they should become less volatile in the order TcO3Cl > ReO3Cl > BhO3Cl: this was experimentally confirmed in 2000 by measuring the enthalpies of adsorption of these three compounds. The values for TcO3Cl and ReO3Cl are −51 kJ/mol and −61 kJ/mol respectively; the experimental value for BhO3Cl is −77.8 kJ/mol, very close to the theoretically expected value of −78.5 kJ/mol.
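The volatility ordering and the experiment-theory agreement quoted above can be restated numerically (all enthalpy values, in kJ/mol, are taken from the text; the more negative the adsorption enthalpy, the less volatile the oxychloride):

```python
# Adsorption enthalpies (kJ/mol) quoted in the text; more negative
# adsorption enthalpy corresponds to lower volatility.
enthalpies = {"TcO3Cl": -51.0, "ReO3Cl": -61.0, "BhO3Cl": -77.8}
theory_bh = -78.5  # theoretical prediction for BhO3Cl

# Sorting by enthalpy (least negative first) reproduces the volatility
# sequence TcO3Cl > ReO3Cl > BhO3Cl.
ranked = sorted(enthalpies, key=enthalpies.get, reverse=True)
print(" > ".join(ranked))  # TcO3Cl > ReO3Cl > BhO3Cl

deviation = abs(enthalpies["BhO3Cl"] - theory_bh)
print(f"BhO3Cl experiment vs. theory: {deviation:.1f} kJ/mol apart")  # 0.7
```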
Physical and atomic
Bohrium is expected to be a solid under normal conditions and assume a hexagonal close-packed crystal structure (c/a = 1.62), similar to its lighter congener rhenium. Early predictions by Fricke estimated its density at 37.1 g/cm3, but newer calculations predict a somewhat lower value of 26–27 g/cm3.
The atomic radius of bohrium is expected to be around 128 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Bh+ ion is predicted to have an electron configuration of [Rn] 5f14 6d4 7s2, giving up a 6d electron instead of a 7s electron, which is the opposite of the behavior of its lighter homologues manganese and technetium. Rhenium, on the other hand, follows its heavier congener bohrium in giving up a 5d electron before a 6s electron, as relativistic effects have become significant by the sixth period, where they cause among other things the yellow color of gold and the low melting point of mercury. The Bh2+ ion is expected to have an electron configuration of [Rn] 5f14 6d3 7s2; in contrast, the Re2+ ion is expected to have a [Xe] 4f14 5d5 configuration, this time analogous to manganese and technetium. The ionic radius of hexacoordinate heptavalent bohrium is expected to be 58 pm (heptavalent manganese, technetium, and rhenium having values of 46, 57, and 53 pm respectively). Pentavalent bohrium should have a larger ionic radius of 83 pm.
Experimental chemistry
In 1995, the first reported attempt to isolate the element was unsuccessful, prompting new theoretical studies to determine how best to study bohrium (using its lighter homologs technetium and rhenium for comparison) and how to remove unwanted contaminating elements such as the trivalent actinides, the group 5 elements, and polonium.
In 2000, it was confirmed that although relativistic effects are important, bohrium behaves like a typical group 7 element. A team at the Paul Scherrer Institute (PSI) conducted a chemistry experiment using six atoms of 267Bh produced in the reaction between 249Bk and 22Ne ions. The resulting atoms were thermalised and reacted with a HCl/O2 mixture to form a volatile oxychloride. The reaction also produced isotopes of its lighter homologues, technetium (as 108Tc) and rhenium (as 169Re). The isothermal adsorption curves were measured and gave strong evidence for the formation of a volatile oxychloride with properties similar to that of rhenium oxychloride. This placed bohrium as a typical member of group 7. The adsorption enthalpies of the oxychlorides of technetium, rhenium, and bohrium were measured in this experiment, agreeing very well with the theoretical predictions and implying a sequence of decreasing oxychloride volatility down group 7 of TcO3Cl > ReO3Cl > BhO3Cl.
2 Bh + 3 O2 + 2 HCl → 2 BhO3Cl + H2
The longer-lived heavy isotopes of bohrium, produced as the daughters of heavier elements, offer advantages for future radiochemical experiments. Although the heavy isotope 274Bh requires a rare and highly radioactive berkelium target for its production, the isotopes 272Bh, 271Bh, and 270Bh can be readily produced as daughters of more easily produced moscovium and nihonium isotopes.
Notes
References
Bibliography
External links
Bohrium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Transition metals
Synthetic elements
Chemical elements with hexagonal close-packed structure
|
https://en.wikipedia.org/wiki/Bipedalism
|
Bipedalism is a form of terrestrial locomotion where a tetrapod moves by means of its two rear (or lower) limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped, meaning 'two feet' (from Latin bis 'double' and pes 'foot'). Types of bipedal movement include walking or running (a bipedal gait) and hopping.
Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs (a group that includes crocodiles and dinosaurs) developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes (australopithecines, including humans) as well as various other extinct groups evolving the trait independently.
A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally.
Etymology
The word is derived from the Latin words bi(s) 'two' and ped- 'foot', as contrasted with quadruped 'four feet'.
Advantages
Limited and exclusive bipedalism can offer a species several advantages. Bipedalism raises the head; this allows a greater field of vision with improved detection of distant dangers or resources, access to deeper water for wading animals and allows the animals to reach higher food sources with their mouths. While upright, non-locomotory limbs become free for other uses, including manipulation (in primates and rodents), flight (in birds), digging (in the giant pangolin), combat (in bears, great apes and the large monitor lizard) or camouflage.
The maximum bipedal speed appears slower than the maximum speed of quadrupedal movement with a flexible backbone: the cheetah can outsprint even the fastest bipeds, the ostrich and the red kangaroo. Even though bipedalism is slower at first, over long distances it has allowed humans to outrun most other animals, according to the endurance running hypothesis. Bipedality in kangaroo rats has been hypothesized to improve locomotor performance, which could aid in escaping from predators.
Facultative and obligate bipedalism
Zoologists often label behaviors, including bipedalism, as "facultative" (i.e. optional) or "obligate" (the animal has no reasonable alternative). Even this distinction is not completely clear-cut: for example, humans other than infants normally walk and run bipedally, but almost all can crawl on hands and knees when necessary. There are even reports of humans who normally walk on all fours with their feet but not their knees on the ground, but these cases result from conditions such as Uner Tan syndrome, a very rare genetic neurological disorder, rather than normal behavior. Even setting aside exceptions caused by injury or illness, many cases remain unclear. This article therefore avoids the terms "facultative" and "obligate", and focuses on the range of styles of locomotion normally used by various groups of animals. Normal humans may be considered "obligate" bipeds because the alternatives are very uncomfortable and usually resorted to only when walking is impossible.
Movement
There are a number of states of movement commonly associated with bipedalism.
Standing. Staying still on both legs. In most bipeds this is an active process, requiring constant adjustment of balance.
Walking. One foot in front of another, with at least one foot on the ground at any time.
Running. One foot in front of another, with periods where both feet are off the ground.
Jumping/hopping. Moving by a series of jumps with both feet moving together.
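The four states above are distinguished purely by foot-contact timing, which makes them easy to express in code. The sketch below is illustrative (the function name and the frame encoding are assumptions, not from any standard source); it classifies a bipedal gait from a sequence of per-frame ground-contact states:

```python
def classify_gait(frames):
    """Classify a bipedal gait from a sequence of ground-contact states.

    Each frame is a tuple (left_on_ground, right_on_ground) of booleans.
    Returns one of: "standing", "walking", "running", "hopping".
    """
    aerial = any(not l and not r for l, r in frames)   # both feet off the ground
    alternating = any(l != r for l, r in frames)       # feet out of phase

    if not (aerial or alternating):
        return "standing"        # both feet down throughout
    if aerial and not alternating:
        return "hopping"         # jumps with both feet moving together
    if aerial:
        return "running"         # alternating steps with an aerial phase
    return "walking"             # alternating steps, one foot always down
```

For instance, a sequence that alternates feet and never leaves the ground classifies as walking, while the same alternation with an aerial phase classifies as running.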
Bipedal animals
The great majority of living terrestrial vertebrates are quadrupeds, with bipedalism exhibited by only a handful of living groups. Humans, gibbons and large birds walk by raising one foot at a time. On the other hand, most macropods, smaller birds, lemurs and bipedal rodents move by hopping on both legs simultaneously. Tree kangaroos are able to walk or hop, most commonly alternating feet when moving arboreally and hopping on both feet simultaneously when on the ground.
Extant reptiles
Many species of lizards become bipedal during high-speed, sprint locomotion, including the world's fastest lizard, the spiny-tailed iguana (genus Ctenosaura).
Early reptiles and lizards
The first known biped is the bolosaurid Eudibamus, whose fossils date from 290 million years ago. Its long hind legs, short forelegs, and distinctive joints all suggest bipedalism. The species became extinct in the early Permian.
Archosaurs (includes crocodilians and dinosaurs)
Birds
All birds are bipeds, as is the case for all theropod dinosaurs. However, hoatzin chicks have claws on their wings which they use for climbing.
Other archosaurs
Bipedalism evolved more than once in archosaurs, the group that includes both dinosaurs and crocodilians. All dinosaurs are thought to be descended from a fully bipedal ancestor, perhaps similar to Eoraptor.
Dinosaurs diverged from their archosaur ancestors approximately 230 million years ago during the Middle to Late Triassic period, roughly 20 million years after the Permian-Triassic extinction event wiped out an estimated 95 percent of all life on Earth. Radiometric dating of fossils from the early dinosaur genus Eoraptor establishes its presence in the fossil record at this time. Paleontologists suspect Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators.
Bipedal movement also re-evolved in a number of other dinosaur lineages such as the iguanodonts. Some extinct members of Pseudosuchia, a sister group to the avemetatarsalians (the group including dinosaurs and relatives), also evolved bipedal forms – a poposauroid from the Triassic, Effigia okeeffeae, is thought to have been bipedal. Pterosaurs were previously thought to have been bipedal, but recent trackways have all shown quadrupedal locomotion.
Mammals
A number of mammal groups have independently evolved bipedalism as their main form of locomotion, for example humans, giant pangolins, the extinct giant ground sloths, numerous species of jumping rodents and the macropods. Human bipedalism has been studied particularly extensively and is documented in the next section. Macropods are believed to have evolved bipedal hopping only once in their evolution, at some time no later than 45 million years ago.
Bipedal movement is less common among mammals, most of which are quadrupedal. All primates possess some bipedal ability, though most species primarily use quadrupedal locomotion on land. Primates aside, the macropods (kangaroos, wallabies and their relatives), kangaroo rats and mice, hopping mice and springhare move bipedally by hopping. Very few non-primate mammals commonly move bipedally with an alternating leg gait. Exceptions are the ground pangolin and in some circumstances the tree kangaroo. One black bear, Pedals, became famous locally and on the internet for having a frequent bipedal gait, although this is attributed to injuries on the bear's front paws. A two-legged fox was filmed in a Derbyshire garden in 2023, most likely having been born that way.
Primates
Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support.
Chimpanzees, bonobos, gorillas, gibbons and baboons exhibit forms of bipedalism. On the ground sifakas move like all indriids, with bipedal sideways hopping movements of the hind legs, holding their forelimbs up for balance. Geladas, although usually quadrupedal, will sometimes move between adjacent feeding patches with a squatting, shuffling bipedal form of locomotion. However, they can do so only briefly, as their bodies are not adapted for sustained bipedal locomotion.
Humans are the only primates who are normally bipedal, due to an extra curve in the spine which stabilizes the upright position, as well as arms that are shorter relative to the legs than in the nonhuman great apes. The evolution of human bipedalism began in primates about four million years ago, or as early as seven million years ago with Sahelanthropus, or about 12 million years ago with Danuvius guggenmosi. One hypothesis for human bipedalism is that it evolved as a result of differentially successful survival from carrying food to share with group members, although there are alternative hypotheses.
Injured individuals
Injured chimpanzees and bonobos have been capable of sustained bipedalism.
Three captive primates, a macaque named Natasha and two chimpanzees, Oliver and Poko, were found to move bipedally. Natasha switched to exclusive bipedalism after an illness, while Poko was discovered in captivity in a tall, narrow cage. Oliver reverted to knuckle-walking after developing arthritis. Non-human primates often use bipedal locomotion when carrying food, or while moving through shallow water.
Limited bipedalism
Limited bipedalism in mammals
Other mammals engage in limited, non-locomotory bipedalism. A number of animals, such as rats, raccoons, and beavers, will squat on their hind legs to manipulate objects but revert to four limbs when moving (though the beaver will move bipedally when transporting wood for its dam, as will the raccoon when holding food). Bears will fight in a bipedal stance to use their forelegs as weapons. A number of mammals adopt a bipedal stance in specific situations such as feeding or fighting. Ground squirrels and meerkats will stand on their hind legs to survey their surroundings, but will not walk bipedally. Dogs (e.g. Faith) can stand or move on two legs if trained, or if a birth defect or injury precludes quadrupedalism. The gerenuk antelope stands on its hind legs while eating from trees, as did the extinct giant ground sloth and the chalicotheres. The spotted skunk will walk on its front legs when threatened, rearing up while facing the attacker so that its anal glands, capable of spraying an offensive oil, face its attacker.
Limited bipedalism in non-mammals (and non-birds)
Bipedalism is unknown among the amphibians. Among the non-archosaur reptiles bipedalism is rare, but it is found in the "reared-up" running of lizards such as agamids and monitor lizards. Many reptile species will also temporarily adopt bipedalism while fighting. One genus of basilisk lizard can run bipedally across the surface of water for some distance. Among arthropods, cockroaches are known to move bipedally at high speeds. Bipedalism is rarely found outside terrestrial animals, though at least two types of octopus walk bipedally on the sea floor using two of their arms, allowing the remaining arms to be used to camouflage the octopus as a mat of algae or a floating coconut.
Evolution of human bipedalism
There are at least twelve distinct hypotheses as to how and why bipedalism evolved in humans, and also some debate as to when. Bipedalism evolved well before the large human brain or the development of stone tools. Bipedal specializations are found in Australopithecus fossils from 4.2 to 3.9 million years ago, and recent studies have suggested that obligate bipedal hominid species were present as early as 7 million years ago. Nonetheless, the evolution of bipedalism was accompanied by significant changes in the spine, including the forward movement of the foramen magnum, where the spinal cord leaves the cranium. Sexual dimorphism (physical differences between male and female) in the modern human lumbar spine has also been observed in pre-modern primates such as Australopithecus africanus; this dimorphism has been interpreted as an evolutionary adaptation of females to bear lumbar load better during pregnancy, an adaptation that non-bipedal primates would not need to make. Adopting bipedalism would have required less shoulder stability, which allowed the shoulder and other limbs to become more independent of each other and to adapt for specific suspensory behaviors. In addition, the change in locomotion would have increased the demand for shoulder mobility, which would have propelled the evolution of bipedalism forward. The different hypotheses are not necessarily mutually exclusive, and a number of selective forces may have acted together to lead to human bipedalism. It is also important to distinguish between adaptations for bipedalism and adaptations for running, which came later still.
The form and function of the modern human upper body appear to have evolved from living in a more forested setting, where the ability to travel arboreally was advantageous. It has also been proposed that, like some modern apes, early hominins underwent a knuckle-walking stage prior to adapting the hind limbs for bipedality while retaining forearms capable of grasping. Proposed causes for the evolution of human bipedalism include freeing the hands for carrying and using tools, sexual dimorphism in provisioning, changes in climate and environment (from jungle to savanna) that favored a more elevated eye position, and reducing the amount of skin exposed to the tropical sun. Bipedalism may have provided a variety of benefits to the hominin lineage, and scientists have suggested multiple reasons for its evolution. There is also not only the question of why the earliest hominins were partially bipedal but also why hominins became more bipedal over time. For example, the postural feeding hypothesis describes how the earliest hominins became bipedal to reach food in trees, while the savanna-based theory describes how the later hominins that settled on the ground became increasingly bipedal.
Multiple factors
Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated, "It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages accruing from the ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts." Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts, arguing that a single factor underlies it: preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove the evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two main questions regarding bipedalism: 1. Why were the earliest hominins partially bipedal? and 2. Why did hominins become more bipedal over time? He argued that these questions can be answered with a combination of prominent theories such as the savanna-based, postural feeding, and provisioning models.
Savannah-based theory
According to the savanna-based theory, hominines came down from the trees and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adopt bipedal locomotion on the open savanna after they left the trees. One proposed mechanism is the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in the forelimbs of Australopithecus anamensis and Australopithecus afarensis; on this view it is more parsimonious to assume that knuckle-walking evolved once as a synapomorphy of Pan and Gorilla and was then lost in Australopithecus than that it developed independently in the two genera. The evolution of an orthograde posture would have been very helpful on a savanna, as it would allow the animal to look over tall grasses to watch for predators, or to hunt and sneak up on prey terrestrially. It was also suggested in P. E. Wheeler's "The evolution of bipedality and loss of functional body hair in hominids" that a possible advantage of bipedalism in the savanna was reducing the surface area of the body exposed to the sun, helping regulate body temperature. Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining the shrinking of forested areas due to global warming and cooling, which forced animals out into the open grasslands and created the need for hominids to acquire bipedality.
Others state that hominines had already achieved the bipedal adaptation before moving into the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees and was later carried over to the savanna as a retained trait. Humans and orangutans share a distinctive bipedal reactive adaptation when climbing on thin branches: both increase hip and knee extension as branch diameter decreases, which can extend an arboreal feeding range and can be attributed to a convergent evolution of bipedalism in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to believe hominines lived, slept, walked upright, and died only in those environments, because no hominine fossils had been found in forested areas. However, fossilization is a rare occurrence; conditions must be just right for an organism that dies to become fossilized and later be found. The absence of hominine fossils in forests does not show that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years.
Some of the fossils found actually showed that there was still an adaptation to arboreal life. For example, Lucy, the famous Australopithecus afarensis, found in Hadar in Ethiopia, which may have been forested at the time of Lucy's death, had curved fingers that would still give her the ability to grasp tree branches, but she walked bipedally. "Little Foot," a nearly-complete specimen of Australopithecus africanus, has a divergent big toe as well as the ankle strength to walk upright. "Little Foot" could grasp things using his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil in the locations in which these fossils were found suggest that the area used to be much more wet and covered in thick vegetation and has only recently become the arid desert it is now.
Traveling efficiency hypothesis
An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and that bipedalism offered greater efficiency than quadrupedalism for long-distance travel between these clusters. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, quadrupedal and bipedal energy costs were found to be very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costly. This increased travel efficiency is likely to have been selected for, as it assisted foraging across widely dispersed resources.
Postural feeding hypothesis
The postural feeding hypothesis has recently been supported by Kevin Hunt, a professor at Indiana University. This hypothesis asserts that chimpanzees are bipedal chiefly when they eat: on the ground, they reach up for fruit hanging from small trees, and in trees, bipedalism is used to reach an overhead branch. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Hunt's hypothesis also states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. In fossil anatomy, Australopithecus afarensis has hand and shoulder features very similar to the chimpanzee's, which indicates arm-hanging. The Australopithecus hip and hind limb very clearly indicate bipedalism, but these fossils also indicate very inefficient locomotion compared with humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture.
A related study by Susannah Thorpe, a professor at the University of Birmingham, examined the most arboreal great ape, the orangutan, which holds onto supporting branches in order to navigate branches that would otherwise be too flexible or unstable. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of the forests where A. afarensis and other ancestors of modern humans and other apes resided could have contributed to this increase in bipedalism as a way to navigate the diminishing forests. The findings could also shed light on discrepancies observed in the anatomy of A. afarensis, such as an ankle joint that allowed it to "wobble" and long, highly flexible forelimbs. If bipedalism started as upright navigation in trees, it could explain both the increased flexibility of the ankle and the long forelimbs that grab hold of branches.
Provisioning model
One theory on the origin of bipedalism is the behavioral model presented by C. Owen Lovejoy, known as "male provisioning". Lovejoy theorizes that the evolution of bipedalism was linked to monogamy. In the face of the long inter-birth intervals and low reproductive rates typical of the apes, early hominids engaged in pair-bonding that enabled greater parental effort directed towards rearing offspring. Lovejoy proposes that male provisioning of food would improve offspring survivorship and increase the pair's reproductive rate. Thus the male would leave his mate and offspring to search for food and return carrying it in his arms, walking on two legs. This model is supported by the reduction ("feminization") of the male canine teeth in early hominids such as Sahelanthropus tchadensis and Ardipithecus ramidus, which, along with low body-size dimorphism in Ardipithecus and Australopithecus, suggests a reduction in inter-male antagonistic behavior in early hominids. In addition, the model is supported by a number of modern human traits associated with concealed ovulation (permanently enlarged breasts, lack of sexual swelling) and low sperm competition (moderately sized testes, low sperm mid-piece volume) that argue against a recent adaptation to a polygynous reproductive system.
However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size; that is, sexual dimorphism is minimal, yet other studies have suggested that Australopithecus afarensis males were nearly twice the weight of females. However, Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could attain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would prevent the infant from conveniently clinging to the mother, hampering the mother's freedom and thus making her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons also tend to be territorial, but fossil evidence indicates that Australopithecus afarensis lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ('masculinize') their canines so they can actively share in the defense of their home territory. Instead, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group-living primate.
Early bipedalism in homininae model
Recent studies of the 4.4-million-year-old Ardipithecus ramidus suggest bipedalism. It is thus possible that bipedalism evolved very early in Homininae and was later reduced in chimpanzees and gorillas when they became more specialized. Other recent studies of the foot structure of Ardipithecus ramidus suggest that the species was closely related to African-ape ancestors, possibly providing a species close to the true connection between fully bipedal hominins and quadrupedal apes. According to Richard Dawkins in his book The Ancestor's Tale, chimps and bonobos are descended from Australopithecus gracile-type species while gorillas are descended from Paranthropus. These apes may have once been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by those australopithecines from whom hominins eventually evolved. Early hominines such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism that later evolved independently towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans. It has also been proposed that less efficient running was one cause of Neanderthal extinction.
Warning display (aposematic) model
Joseph Jordania from the University of Melbourne recently (2011) suggested that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display and intimidation of potential predators and competitors with exaggerated visual and audio signals. According to this model, hominids were trying to stay as visible and as loud as possible all the time. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voice and extremely loud rhythmic singing/stomping/drumming on external subjects. Slow locomotion and strong body odor (both characteristic for hominids and humans) are other features often employed by aposematic species to advertise their non-profitability for potential predators.
Other behavioural models
There are a variety of ideas which promote a specific change in behaviour as the key driver for the evolution of hominid bipedalism. For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour which led to some groups of apes beginning to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for more vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. And it has even been suggested (e.g. Tanner 1981:165) that male phallic display could have been the initial incentive, as well as increased sexual signaling in upright female posture.
Thermoregulatory model
The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises more of the body surface higher above the ground, which reduces heat gain and helps heat dissipation. Higher above the ground, the organism also accesses more favorable wind speeds and temperatures; in hot seasons, greater wind flow results in higher heat loss, which makes the organism more comfortable. Wheeler also explains that a vertical posture minimizes direct exposure to the sun, whereas quadrupedalism exposes more of the body to it. Analysis and interpretation of Ardipithecus suggest that this hypothesis needs modification: a forest and woodland preadaptation for early-stage hominid bipedalism preceded its further refinement under natural selection, which then allowed more efficient exploitation of the hotter, open ecological niche, rather than hot conditions being bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot and open habitats would then in turn make the forest preadaptation solidify as a permanent state.
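Wheeler's sun-exposure argument can be illustrated with a toy geometric model. This is a simplification for illustration only, not Wheeler's actual calculation: treat the trunk as a cylinder (assumed dimensions below) and compare the silhouette it presents to the sun when upright versus horizontal.

```python
import math

def sunlit_area(length, diameter, elevation_deg, upright):
    """Silhouette area (m^2) a cylindrical trunk presents to the sun.

    elevation_deg: solar elevation above the horizon (90 = overhead).
    The horizontal posture is taken broadside to the sun's azimuth,
    where the silhouette is the full length x diameter rectangle at
    any elevation.
    """
    e = math.radians(elevation_deg)
    if upright:
        # Rectangle seen edge-on plus the circular top seen from above.
        return (length * diameter * math.cos(e)
                + math.pi * (diameter / 2) ** 2 * math.sin(e))
    return length * diameter

trunk_len, trunk_dia = 0.8, 0.3   # assumed trunk dimensions (m)
for elev in (30, 60, 90):
    up = sunlit_area(trunk_len, trunk_dia, elev, upright=True)
    flat = sunlit_area(trunk_len, trunk_dia, elev, upright=False)
    print(f"sun at {elev:2d} deg: upright {up:.3f} m^2, horizontal {flat:.3f} m^2")
```

With these assumed dimensions, an upright trunk presents roughly a third of the horizontal trunk's area to an overhead sun, while the advantage vanishes when the sun is low, matching the model's emphasis on midday heat load.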
Carrying models
Charles Darwin wrote that "Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to act in obedience to his will" (Darwin 1871:52), and many models of bipedal origins follow this line of thought. Gordon Hewes (1961) suggested that the carrying of meat "over considerable distances" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his "provisioning model" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested that stone tools and weapons drove the change. The stone-tool theory is unlikely, however: although ancient humans are known to have hunted, stone tools do not appear in the record until long after the origin of bipedalism, chronologically precluding them from being a driving force of evolution. (Wooden tools and spears fossilize poorly, so it is difficult to judge their potential usage.)
Wading models
The observation that large primates, including especially the great apes, which predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist-deep water, has led to the idea that the origin of human bipedalism may have been influenced by waterside environments. This idea, labelled "the wading hypothesis", was originally suggested by the Oxford marine biologist Alister Hardy, who said: "It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running." It was then promoted by Elaine Morgan as part of the aquatic ape hypothesis; she cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The "aquatic ape hypothesis", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without invoking further ("aquatic ape" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the "amphibian generalist theory".
Other theories propose that wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution, or critical fallback foods) may have exerted evolutionary pressures on human ancestors, promoting adaptations that later assisted full-time bipedalism. It has also been suggested that consistent water-based food sources fostered dependency among early hominids and facilitated dispersal along seas and rivers.
Consequences
Prehistoric fossil records show that early hominins developed bipedalism before the increase in brain size. The combined consequence of these two changes was painful and difficult labor: the narrow pelvis favored for bipedalism conflicts with the larger infant heads that must pass through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma.
Non-human primates habitually deliver their young on their own, but the same cannot be said for modern humans. Isolated birth appears to be rare and actively avoided cross-culturally, even though birthing methods differ between cultures. This is because the narrowing of the hips and the change in pelvic angle created a mismatch between the size of the head and the birth canal. As a result, birth is more difficult for hominins in general, let alone unassisted.
Physiology
Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below.
Biomechanics
Standing
Energy-efficient bipedal standing requires constant adjustment of balance, and these adjustments must avoid overcorrection. The difficulty of simply standing upright in humans is highlighted by the greatly increased risk of falling in the elderly, even with minimal reductions in the effectiveness of the control system.
Shoulder stability
Shoulder stability would have decreased with the evolution of bipedalism, and shoulder mobility increased, because a stable shoulder is needed only in arboreal habitats; greater mobility supports the suspensory locomotor behaviors that persist in bipedal humans. Because bipedalism frees the forelimbs from weight-bearing requirements, the shoulder is a useful source of evidence for the evolution of bipedalism.
Walking
Unlike non-human apes capable of facultative bipedality, such as Pan and Gorilla, hominins can move bipedally without using a bent-hip-bent-knee (BHBK) gait, which requires the engagement of both the hip and knee joints. This ability is made possible by the spinal curvature that humans have and non-human apes lack. Instead, human walking is characterized by an "inverted pendulum" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify whole-body kinetic and potential energy; during walking, they display an out-of-phase relationship, indicating exchange between the two. This model applies to all walking organisms regardless of the number of legs, so bipedal locomotion does not differ in terms of whole-body kinetics.
In humans, walking is composed of several separate processes:
Vaulting over a stiff stance leg
Passive ballistic movement of the swing leg
A short 'push' from the ankle prior to toe-off, propelling the swing leg
Rotation of the hips about the axis of the spine, to increase stride length
Rotation of the hips about the horizontal axis to improve balance during stance
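The inverted-pendulum exchange of kinetic and potential energy described above can be sketched numerically. This is a toy model with illustrative values (body mass, leg length, walking speed), not measured gait data: the centre of mass rides over a stiff leg of fixed length, and whatever potential energy it gains near mid-stance is debited from its kinetic energy.

```python
import math

def pendulum_energies(mass, leg_length, speed):
    """Toy inverted-pendulum walking model: kinetic and potential energy
    of the centre of mass as the stance leg rotates from -20 to +20 degrees.
    Returns (angle_deg, kinetic_J, potential_J) samples."""
    g = 9.81
    max_theta = math.radians(20)
    ke_max = 0.5 * mass * speed ** 2                  # KE at the start of the vault
    pe_min = mass * g * leg_length * math.cos(max_theta)
    samples = []
    for deg in range(-20, 21, 5):
        theta = math.radians(deg)
        pe = mass * g * leg_length * math.cos(theta)  # CoM height = L * cos(theta)
        ke = ke_max - (pe - pe_min)                   # energy conserved: KE falls as PE rises
        samples.append((deg, ke, pe))
    return samples
```

Potential energy peaks at mid-stance while kinetic energy dips, reproducing the out-of-phase pattern seen in force-plate measurements.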
Running
Early hominins underwent post-cranial changes to better adapt to bipedality, especially running. One of these changes is hindlimbs that are longer in proportion to the forelimbs. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while allowing more space for cooling winds. Longer limbs are also more energy-efficient, since they lessen overall muscle strain. Better energy efficiency, in turn, means higher endurance, particularly when running long distances.
Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and the energy is stored and released from a spring-like limb during foot contact, achieved by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to animals with more limbs.
Musculature
Bipedalism requires strong leg muscles, particularly in the thighs. In domesticated poultry, contrast the well-muscled legs with the small, bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation, as it provides support and stability to the trunk and lessens the stress on the joints when running.
Respiration
Quadrupeds have more restricted breathing while moving than do bipedal humans. "Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running."
Bipedality permits better breath control, which can be associated with brain growth. The modern brain uses approximately 20% of the energy input gained through breathing and eating; by contrast, species like chimpanzees use twice as much energy as humans for the same amount of movement. The energy surplus supporting brain growth also supports the development of verbal communication, because breath control means the muscles associated with breathing can be manipulated to create sounds. The onset of bipedality, by making breathing more efficient, may therefore be related to the origin of verbal language.
Bipedal robots
For nearly the whole of the 20th century, bipedal robots were very difficult to construct, and robot locomotion involved only wheels, treads, or multiple legs. Recent advances in cheap, compact computing power have made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption.
See also
Allometry
Orthograde posture
Quadrupedalism
Notes
References
Further reading
Darwin, C., "The Descent of Man and Selection in Relation to Sex", Murray (London), (1871).
Dart, R. A., "Australopithecus africanus: The Man-Ape of South Africa" Nature, 115, 195–199, (1925).
Dawkins, R., "The Ancestor's Tale", Weidenfeld and Nicolson (London), (2004).
DeSilva, J., "First Steps: How Upright Walking Made Us Human" HarperCollins (New York), (2021)
Hewes, G. W., "Food Transport and the Origin of Hominid Bipedalism" American Anthropologist, 63, 687–710, (1961).
Hunt, K. D., "The Evolution of Human Bipedality" Journal of Human Evolution, 26, 183–202, (1994).
Isaac, G. I., "The Archeological Evidence for the Activities of Early African Hominids" In:Early Hominids of Africa (Jolly, C.J. (Ed.)), Duckworth (London), 219–254, (1978).
Tanner, N. M., "On Becoming Human", Cambridge University Press (Cambridge), (1981)
Wheeler, P. E. (1984) "The Evolution of Bipedality and Loss of Functional Body Hair in Hominoids." Journal of Human Evolution, 13, 91–98,
External links
The Origin of Bipedalism
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016)
https://en.wikipedia.org/wiki/Bioinformatics
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is referred to as computational biology.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. These include reusable, specialized analysis "pipelines", particularly in the field of genomics, such as those for the identification of genes and single nucleotide polymorphisms (SNPs). Such pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History
The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences, released with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were the proof of the concept that bioinformatics would be insightful.
Goals
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. The field of bioinformatics has therefore evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
Development and implementation of computer programs to efficiently access, manage, and use various types of information.
Development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis
Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences from more than 260,000 organisms (as of 2008), containing over 190 billion nucleotides.
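As a minimal sketch of what automated sequence comparison involves (nothing like a full BLAST implementation, which uses indexed seed-and-extend heuristics and statistical scoring), two sequences can be compared by the overlap of their k-mer sets:

```python
def kmers(seq, k=4):
    """All overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(a, b, k=4):
    """Jaccard similarity of the two sequences' k-mer sets: a crude
    stand-in for the shared-word seeds BLAST-style tools search for."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)
```

Identical sequences score 1.0, unrelated sequences near 0, and sequences differing by a few substitutions fall in between.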
DNA sequencing
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
Sequence assembly
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
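A toy greedy assembler illustrates the core idea: repeatedly merge the fragment pair with the longest suffix-prefix overlap until nothing overlaps. Real assemblers must additionally handle sequencing errors, repeats, coverage, and both DNA strands.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap --
    a toy version of overlap-layout-consensus assembly."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # nothing overlaps: leave remaining fragments as contigs
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags
```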
Genome annotation
In genomics, annotation refers to the process of marking the start and stop regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
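The simplest form of ab initio gene finding is an open-reading-frame scan. The sketch below is deliberately naive (forward strand only, no reverse complement, no codon-usage statistics): it looks for an ATG start codon followed in-frame by a stop codon.

```python
def find_orfs(seq, min_codons=3):
    """Scan the three forward reading frames for open reading frames:
    an ATG start codon followed in-frame by a stop codon."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j + 3] in stops:
                        if (j - i) // 3 >= min_codons:
                            orfs.append(seq[i:j + 3])
                        i = j  # resume scanning after this ORF's stop codon
                        break
            i += 3
    return orfs
```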
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs and ribosomal RNAs, in order to make initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is constantly changing and improving.
To pursue the goals left unmet when the Human Genome Project closed in 2003, the National Human Genome Research Institute developed the ENCODE project, a collaborative effort to catalogue the functional elements of the human genome. It uses next-generation DNA-sequencing technologies and genomic tiling arrays, which automatically generate large amounts of data at a dramatically reduced per-base cost while maintaining accuracy (base call error) and fidelity (assembly error).
Gene function prediction
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
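The transmembrane-segment example can be illustrated with a sliding-window mean of Kyte-Doolittle hydropathy values; real predictors use hidden Markov models or neural networks rather than a fixed threshold, and the demo protein below is invented.

```python
# Kyte-Doolittle hydropathy scale (positive = hydrophobic)
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
      "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
      "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
      "K": -3.9, "R": -4.5}

def hydropathy_windows(protein, window=7):
    """Mean hydropathy over a sliding window along the sequence."""
    return [sum(KD[aa] for aa in protein[i:i + window]) / window
            for i in range(len(protein) - window + 1)]

def candidate_tm_segments(protein, window=7, threshold=1.5):
    """Window start positions whose mean hydropathy exceeds the threshold --
    crude candidates for membrane-spanning stretches."""
    return [i for i, s in enumerate(hydropathy_windows(protein, window))
            if s > threshold]
```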
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational population genetics models to predict the outcome of the system over time
track and share information on an increasingly large number of species and organisms
Future work endeavours to reconstruct the now more complex tree of life.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristics, fixed parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
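A common heuristic for the orthology analysis mentioned above is the reciprocal best hit: a gene in genome A and a gene in genome B are paired when each is the other's top-scoring match. The sketch below uses invented gene names and alignment scores.

```python
def reciprocal_best_hits(scores_ab, scores_ba):
    """Orthology by reciprocal best hit: pair (a, b) is kept only when
    b is a's best-scoring match AND a is b's best-scoring match."""
    best_ab = {a: max(hits, key=hits.get) for a, hits in scores_ab.items()}
    best_ba = {b: max(hits, key=hits.get) for b, hits in scores_ba.items()}
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]
```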
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context such as genus or phylum. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.
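The core/dispensable split reduces to set operations over per-genome gene repertoires. A sketch with invented strain and gene names (this says nothing about how BPGA itself works internally):

```python
def pan_genome(genomes):
    """Core genome = genes present in every genome; pan genome = union of
    all genes; dispensable genome = pan minus core."""
    gene_sets = [set(genes) for genes in genomes.values()]
    core = set.intersection(*gene_sets)
    pan = set.union(*gene_sets)
    return core, pan - core
```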
Genetics of disease
As of 2013, efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders catalogued in the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that are each weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment; for example, it is often unclear which genes are important, or how stable the choices an algorithm provides are.
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare-variant association analysis in whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
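A drastically simplified stand-in for those change-point methods: pick the single split that maximises the gap between segment means. Real copy-number callers model the noise explicitly (e.g. with HMMs) and find many segments; the values below are invented copy-number ratios.

```python
def best_changepoint(values):
    """Single change-point detection: return the index k that maximises
    the absolute difference between the means of values[:k] and values[k:]."""
    best_k, best_gap = None, 0.0
    for k in range(1, len(values)):
        left = sum(values[:k]) / k
        right = sum(values[k:]) / (len(values) - k)
        gap = abs(left - right)
        if gap > best_gap:
            best_k, best_gap = k, gap
    return best_k
```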
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations, which need to be distinguished from passenger mutations.
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer-driving mutations in the genome. Furthermore, tracking patients as the disease progresses may be possible in the future with the sequencing of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as with microarrays targeted at mRNA, the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation
Gene regulation is a complex process whereby a signal, such as an extracellular signal like a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can then be applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
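Before clustering, co-expression itself is usually scored with a correlation measure. A sketch using Pearson correlation over invented expression profiles (gene names and values are illustrative only):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpressed_pairs(expr, cutoff=0.9):
    """All gene pairs whose expression profiles correlate above the cutoff."""
    genes = sorted(expr)
    return [(g1, g2) for i, g1 in enumerate(genes) for g2 in genes[i + 1:]
            if pearson(expr[g1], expr[g2]) > cutoff]
```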
Analysis of cellular organization
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.
Microscopy and image analysis
Microscopic pictures allow for the location of organelles as well as molecules, which may be the source of abnormalities in diseases.
Protein localization
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin
Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as topologically associating domains (TADs), that are organised together in three-dimensional space.
Structural bioinformatics
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups worldwide submit predicted models for proteins whose experimental structures have not yet been released; the predictions are then evaluated against those structures.
Amino acid sequence
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
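Determining the primary structure from a coding DNA sequence is a direct codon lookup. The table below is deliberately partial, just enough for the demo sequence; a full implementation would cover all 64 codons (libraries such as Biopython ship complete tables).

```python
# Partial codon table -- only the codons used by the demo sequence
CODON_TABLE = {
    "ATG": "M", "AAA": "K", "GAA": "E", "TTT": "F",
    "GGC": "G", "TAA": "*",  # '*' marks a stop codon
}

def translate(dna):
    """Primary structure read straight off the codons of a coding sequence,
    stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)
```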
Homology
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor.
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure–activity relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding and in silico mutagenesis studies.
AlphaFold, deep-learning-based software developed by Google's DeepMind and described in 2021, greatly outperforms all other prediction methods, and predicted structures for hundreds of millions of proteins have been released in the AlphaFold protein structure database.
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others
Literature analysis
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
Abbreviation recognition – identify the long-form and abbreviation of biological terms
Named-entity recognition – recognizing biological terms such as gene names
Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
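A toy illustration of dictionary-based named-entity recognition: the gene-name lexicon below is hand-made for the example, whereas real systems use curated resources and machine-learned taggers.

```python
import re

# Hypothetical gene-name lexicon for illustration only.
GENE_LEXICON = {"BRCA1", "TP53", "EGFR"}

def find_gene_mentions(text: str):
    # Split on non-alphanumeric characters, keep tokens in the lexicon.
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    return [t for t in tokens if t.upper() in GENE_LEXICON]

sentence = "Mutations in BRCA1 and TP53 are frequently reported."
print(find_gene_mentions(sentence))  # → ['BRCA1', 'TP53']
```

Dictionary lookup fails on novel names, ambiguous symbols, and long forms, which is why abbreviation recognition and statistical NER are active research areas.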
High-throughput image analysis
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are:
high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
morphometrics
clinical image analysis and visualization
determining the real-time air-flow patterns in breathing lungs of living animals
quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
making behavioral observations from extended video recordings of laboratory animals
infrared measurements for metabolic activity determination
inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
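Finding relevant cell populations can be as simple as threshold gating on a measured channel. The sketch below uses made-up marker names and fluorescence values; real analyses typically use clustering and density-based methods rather than fixed thresholds.

```python
# Each cell is a dict of channel -> fluorescence intensity
# (arbitrary illustrative units).
cells = [
    {"CD4": 812, "CD8": 35},
    {"CD4": 44,  "CD8": 900},
    {"CD4": 790, "CD8": 28},
    {"CD4": 30,  "CD8": 25},
]

def gate(cells, channel, threshold):
    """Select cells whose intensity in `channel` exceeds `threshold`."""
    return [c for c in cells if c[channel] > threshold]

cd4_positive = gate(cells, "CD4", 500)
print(len(cd4_positive))  # → 2
```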
Ontologies and data integration
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
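Because an ontology is a directed acyclic graph, a term's full set of ancestors can be collected by a simple graph traversal. The terms below are illustrative, not real Gene Ontology identifiers.

```python
# Toy ontology: each term maps to its parent terms (a DAG).
PARENTS = {
    "calcium ion transport": ["ion transport"],
    "ion transport": ["transport"],
    "transport": ["biological_process"],
    "biological_process": [],
}

def ancestors(term):
    """All terms reachable from `term` by following parent links."""
    seen = set()
    stack = [term]
    while stack:
        for parent in PARENTS.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("calcium ion transport")))
# → ['biological_process', 'ion transport', 'transport']
```

Annotating a gene with a term implicitly annotates it with all of that term's ancestors, which is what enables integrated analyses such as enrichment tests.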
The OBO Foundry is an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies that describe phenotypes.
Databases
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Some of the most commonly used databases are listed below:
Used in biological sequence analysis: Genbank, UniProt
Used in structure analysis: Protein Data Bank (PDB)
Used in finding Protein Families and Motif Finding: InterPro, Pfam
Used for Next Generation Sequencing: Sequence Read Archive
Used in Network Analysis: Metabolic Pathway Databases (KEGG, BioCyc), Interaction Analysis Databases, Functional Networks
Used in design of synthetic genetic circuits: GenoCAD
Software and tools
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to bioinformatics regardless of funding. The open-source tools often act as incubators of ideas or as community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems.
Bioinformatics workflow management systems
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a Bioinformatics application. Such systems are designed to
provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
simplify the process of sharing and reusing workflows between the scientists, and
enable scientists to track the provenance of the workflow execution results and the workflow creation steps.
Platforms providing this service include Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE.
BioCompute and BioCompute Objects
In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was released as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute Object allows the JSON-ized record to be shared among employees, collaborators, and regulators.
Education platforms
Bioinformatics is not only taught as an in-person master's degree at many universities. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π (4273pi) project also offers open-source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization (UC San Diego) and Genomic Data Science Specialization (Johns Hopkins) as well as EdX's Data Analysis for Life Sciences XSeries (Harvard).
Conferences
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
See also
References
Further reading
Sehgal et al., "Structural, phylogenetic and docking studies of D-amino acid oxidase activator (DAOA), a candidate schizophrenia gene". Theoretical Biology and Medical Modelling 10:3, 2013.
Achuthsankar S. Nair, "Computational Biology & Bioinformatics – A Gentle Overview", Communications of Computer Society of India, January 2007
Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/Crc, 2006. (Chapman & Hall/Crc Computer and Information Science Series)
Baldi, P and Brunak, S, Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001.
Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003.
Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005.
Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007.
Cristianini, N. and Hahn, M., Introduction to Computational Genomics, Cambridge University Press, 2006.
Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological sequence analysis. Cambridge University Press, 1998.
Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005.
Kohane, et al. Microarrays for an Integrative Genomics. The MIT Press, 2002.
Lund, O. et al. Immunological Bioinformatics. The MIT Press, 2005.
Pachter, Lior and Sturmfels, Bernd. "Algebraic Statistics for Computational Biology" Cambridge University Press, 2005.
Pevzner, Pavel A. Computational Molecular Biology: An Algorithmic Approach The MIT Press, 2000.
Soinov, L., "Bioinformatics and Pattern Recognition Come Together", Journal of Pattern Recognition Research (JPRR), Vol. 1 (1), 2006, pp. 37–41
Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013.
Tisdall, James. "Beginning Perl for Bioinformatics" O'Reilly, 2001.
Catalyzing Inquiry at the Interface of Computing and Biology (2005) CSTB report
Calculating the Secrets of Life: Contributions of the Mathematical Sciences and computing to Molecular Biology (1995)
Foundations of Computational and Systems Biology MIT Course
Computational Biology: Genomes, Networks, Evolution Free MIT Course
External links
Bioinformatics Resource Portal (SIB)
Barter
In trade, barter (derived from baretor) is a system of exchange in which participants in a transaction directly exchange goods or services for other goods or services without using a medium of exchange, such as money. Economists usually distinguish barter from gift economies in many ways; barter, for example, features immediate reciprocal exchange, not one delayed in time. Barter usually takes place on a bilateral basis, but may be multilateral (if it is mediated through a trade exchange). In most developed countries, barter usually exists parallel to monetary systems only to a very limited extent. Market actors use barter as a replacement for money as the method of exchange in times of monetary crisis, such as when currency becomes unstable (such as hyperinflation or a deflationary spiral) or simply unavailable for conducting commerce.
No ethnographic studies have shown that any present or past society has used barter without any other medium of exchange or measurement, and anthropologists have found no evidence that money emerged from barter. They instead found that gift-giving (credit extended on a personal basis with an inter-personal balance maintained over the long term) was the most usual means of exchange of goods and services. Nevertheless, economists since the times of Adam Smith (1723–1790) often inaccurately imagined pre-modern societies as examples to use the inefficiency of barter to explain the emergence of money, of "the" economy, and hence of the discipline of economics itself.
Economic theory
Adam Smith on the origin of money
Adam Smith sought to demonstrate that markets (and economies) pre-existed the state. He argued that money was not the creation of governments. Markets emerged, in his view, out of the division of labour, by which individuals began to specialize in specific crafts and hence had to depend on others for subsistence goods. These goods were first exchanged by barter. Specialization depended on trade but was hindered by the "double coincidence of wants" which barter requires, i.e., for the exchange to occur, each participant must want what the other has. To complete this hypothetical history, craftsmen would stockpile one particular good, be it salt or metal, that they thought no one would refuse. This is the origin of money according to Smith. Money, as a universally desired medium of exchange, allows each half of the transaction to be separated.
Barter is characterized in Adam Smith's "The Wealth of Nations" by a disparaging vocabulary: "haggling, swapping, dickering". It has also been characterized as negative reciprocity, or "selfish profiteering".
Anthropologists have argued, in contrast, "that when something resembling barter does occur in stateless societies it is almost always between strangers." Barter occurred between strangers, not fellow villagers, and hence cannot be used to naturalistically explain the origin of money without the state. Since most people engaged in trade knew each other, exchange was fostered through the extension of credit. Marcel Mauss, author of 'The Gift', argued that the first economic contracts were contracts not to act in one's economic self-interest, and that before money, exchange was fostered through the processes of reciprocity and redistribution, not barter. Everyday exchange relations in such societies are characterized by generalized reciprocity, or a non-calculative familial "communism" in which each takes according to their needs and gives as they have.
Features of bartering
Often the following features are associated with barter transactions:
There is a demand focus for things of a different kind.
Most often, parties trade goods and services for goods or services that differ from what they are willing to forego.
The parties of the barter transaction are both equal and free.
Neither party has advantages over the other, and both are free to leave the trade at any point in time.
The transaction happens simultaneously.
The goods are normally traded at the same point in time. Nonetheless, delayed barter in goods may occasionally occur. When services are traded, however, the two halves of the trade may be separated in time.
The transaction is transformative.
A barter transaction "moves objects between the regimes of value", meaning that a good or service being traded may take on a new meaning or value for its recipient, different from the one it held for its original owner.
There is no criterion of value.
There is no real way to value each side of the trade. Bargaining takes place not over the value of each party's good or service, but because each party in the transaction wants what the other offers.
Advantages
Since direct barter does not require payment in money, it can be utilized when money is in short supply, when there is little information about the credit worthiness of trade partners, or when there is a lack of trust between those trading.
Barter is an option to those who cannot afford to store their small supply of wealth in money, especially in hyperinflation situations where money devalues quickly.
Limitations
The limitations of barter are often explained in terms of its inefficiencies in facilitating exchange in comparison to money.
It is said that barter is 'inefficient' because:
There needs to be a 'double coincidence of wants'
For barter to occur between two parties, both parties need to have what the other wants.
There is no common measure of value (no standard unit of account)
In a monetary economy, money plays the role of a measure of value of all goods, so their values can be assessed against each other; this role may be absent in a barter economy.
Indivisibility of certain goods
If a person wants to buy a certain amount of another's goods, but only has for payment one indivisible unit of another good which is worth more than what the person wants to obtain, a barter transaction cannot occur.
Lack of standards for deferred payments
This is related to the absence of a common measure of value, although if the debt is denominated in units of the good that will eventually be used in payment, it is not a problem.
Difficulty in storing wealth
If a society relies exclusively on perishable goods, storing wealth for the future may be impractical. However, some barter economies rely on durable goods like sheep or cattle for this purpose.
History
Silent trade
Other anthropologists have questioned whether barter is typically between "total" strangers, a form of barter known as "silent trade". Silent trade, also called silent barter, dumb barter ("dumb" here used in its old meaning of "mute"), or depot trade, is a method by which traders who cannot speak each other's language can trade without talking. However, Benjamin Orlove has shown that while barter occurs through "silent trade" (between strangers), it occurs in commercial markets as well. "Because barter is a difficult way of conducting trade, it will occur only where there are strong institutional constraints on the use of money or where the barter symbolically denotes a special social relationship and is used in well-defined conditions. To sum up, multipurpose money in markets is like lubrication for machines - necessary for the most efficient function, but not necessary for the existence of the market itself."
In his analysis of barter between coastal and inland villages in the Trobriand Islands, Keith Hart highlighted the difference between highly ceremonial gift exchange between community leaders, and the barter that occurs between individual households. The haggling that takes place between strangers is possible because of the larger temporary political order established by the gift exchanges of leaders. From this he concludes that barter is "an atomized interaction predicated upon the presence of society" (i.e. that social order established by gift exchange), and not typical between complete strangers.
Times of monetary crisis
As Orlove noted, barter may occur in commercial economies, usually during periods of monetary crisis. During such a crisis, currency may be in short supply, or highly devalued through hyperinflation. In such cases, money ceases to be the universal medium of exchange or standard of value. Money may be in such short supply that it becomes an item of barter itself rather than the means of exchange. Barter may also occur when people cannot afford to keep money (as when hyperinflation quickly devalues it).
An example of this would be during the Crisis in Bolivarian Venezuela, when Venezuelans resorted to bartering as a result of hyperinflation. The increasingly low value of bank notes, and their lack of circulation in suburban areas, meant that many Venezuelans, especially those living outside of larger cities, took to trading their own goods for even the most basic of transactions.
Additionally, in the wake of the 2008 financial crisis, barter exchanges reported a double-digit increase in membership, due to the scarcity of fiat money, and the degradation of monetary system sentiment.
Exchanges
Economic historian Karl Polanyi has argued that where barter is widespread, and cash supplies limited, barter is aided by the use of credit, brokerage, and money as a unit of account (i.e. used to price items). All of these strategies are found in ancient economies including Ptolemaic Egypt. They are also the basis for more recent barter exchange systems.
While one-to-one bartering is practised between individuals and businesses on an informal basis, organized barter exchanges have developed to conduct third party bartering which helps overcome some of the limitations of barter. A barter exchange operates as a broker and bank in which each participating member has an account that is debited when purchases are made, and credited when sales are made.
Modern barter and trade has evolved considerably to become an effective method of increasing sales, conserving cash, moving inventory, and making use of excess production capacity for businesses around the world. Businesses in a barter earn trade credits (instead of cash) that are deposited into their account. They then have the ability to purchase goods and services from other members utilizing their trade credits – they are not obligated to purchase from those whom they sold to, and vice versa. The exchange plays an important role because they provide the record-keeping, brokering expertise and monthly statements to each member. Commercial exchanges make money by charging a commission on each transaction either all on the buy side, all on the sell side, or a combination of both. Transaction fees typically run between 8 and 15%. A successful example is International Monetary Systems, which was founded in 1985 and is one of the first exchanges in North America opened after the TEFRA Act of 1982.
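The broker-and-bank mechanics described above can be sketched as a simple trade-credit ledger. The member names and the 10% commission rate are illustrative, not drawn from any real exchange.

```python
class BarterExchange:
    """Toy ledger: members earn trade credits on sales and spend them
    on purchases; the exchange takes a commission on each trade."""

    def __init__(self, commission=0.10):  # 10% is an assumed rate
        self.commission = commission
        self.balances = {}
        self.fees_collected = 0.0

    def join(self, member):
        self.balances.setdefault(member, 0.0)

    def trade(self, buyer, seller, amount):
        fee = amount * self.commission       # here charged on the buy side
        self.balances[buyer] -= amount + fee  # account debited on purchase
        self.balances[seller] += amount       # account credited on sale
        self.fees_collected += fee

ex = BarterExchange()
for m in ("printer", "dentist"):
    ex.join(m)
ex.trade(buyer="dentist", seller="printer", amount=200.0)
print(ex.balances["printer"], ex.balances["dentist"], ex.fees_collected)
# → 200.0 -220.0 20.0
```

The seller's 200 trade credits can now be spent with any other member, not only with the dentist, which is exactly how the exchange overcomes the double coincidence of wants.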
Organized Barter (Retail Barter)
Since the 1930s, organized barter has been a common type of barter in which companies join a barter organization (barter company) that serves as a hub for exchanging goods and services without money as a medium of exchange. Similarly to brokerage houses, the barter company facilitates the exchange of goods and services between member companies, allowing members to acquire goods and services by providing their own as payment. Member companies are required to sign a barter agreement with the barter company as a condition of membership. In turn, the barter company provides each member with the current levels of supply and demand for each good and service that can be purchased or sold in the system. These transactions are mediated by barter authorities of the member companies. The member companies can then acquire their desired goods or services from another member company within a predetermined time. Failure to deliver the good or service within the fixed period results in the debt being settled in cash. Each member company pays an annual membership fee and the purchase and sales commissions outlined in the contract. Organized barter increases liquidity for member companies, as it reduces the need for cash to settle transactions, enabling sales and purchases to be made with excess capacity or surplus inventory. Additionally, organized barter can confer competitive advantage within industries and sectors: since the volume of transactions depends on the supply–demand balance of goods and services within the barter organization, member companies tend to face minimal competition within their own operating sector.
Corporate Barter
Producers, wholesalers and distributors tend to engage in corporate barter as a method of exchanging goods and services with companies they are in business with. These bilateral barter transactions are targeted towards companies aiming to convert stagnant inventories into receivable goods or services, to increase market share without cash investments, and to protect liquidity. However, issues arise as to the imbalance of supply and demand of desired goods and services and the inability to efficiently match the value of goods and services exchanged in these transactions.
Labour notes
The Owenite socialists in Britain and the United States in the 1830s were the first to attempt to organize barter exchanges. Owenism developed a "theory of equitable exchange" as a critique of the exploitative wage relationship between capitalist and labourer, by which all profit accrued to the capitalist. To counteract the uneven playing field between employers and employed, they proposed "schemes of labour notes based on labour time, thus institutionalizing Owen's demand that human labour, not money, be made the standard of value." This alternate currency eliminated price variability between markets, as well as the role of merchants who bought low and sold high. The system arose in a period where paper currency was an innovation. Paper currency was an IOU circulated by a bank (a promise to pay, not a payment in itself). Both merchants and an unstable paper currency created difficulties for direct producers.
An alternate currency, denominated in labour time, would prevent profit taking by middlemen; all goods exchanged would be priced only in terms of the amount of labour that went into them, as expressed in the maxim 'Cost the limit of price'. It became the basis of exchanges in London, and in America, where the idea was implemented at the New Harmony communal settlement by Josiah Warren in 1826, and in his Cincinnati 'Time store' in 1827. Warren's ideas were adopted by other Owenites and currency reformers, even though the labour exchanges were relatively short lived.
In England, about 30 to 40 cooperative societies sent their surplus goods to an "exchange bazaar" for direct barter in London, which later adopted a similar labour note. The British Association for Promoting Cooperative Knowledge established an "equitable labour exchange" in 1830. This was expanded as the National Equitable Labour Exchange in 1832 on Grays Inn Road in London. These efforts became the basis of the British cooperative movement of the 1840s. In 1848, the socialist and first self-designated anarchist Pierre-Joseph Proudhon postulated a system of time chits.
Michael Linton originated the term "local exchange trading system" (LETS) in 1983 and for a time ran the Comox Valley LETSystem in Courtenay, British Columbia. LETS networks use interest-free local credit so direct swaps do not need to be made. For instance, a member may earn credit by doing childcare for one person and spend it later on carpentry with another person in the same network. In LETS, unlike other local currencies, no scrip is issued; rather, transactions are recorded in a central location open to all members. As credit is issued by the network members, for the benefit of the members themselves, LETS are considered mutual credit systems.
Local currencies
The first exchange system was the Swiss WIR Bank. It was founded in 1934 as a result of currency shortages after the stock market crash of 1929. "WIR" is both an abbreviation of Wirtschaftsring (economic circle) and the word for "we" in German, reminding participants that the economic circle is also a community.
In Australia and New Zealand, the largest barter exchange is Bartercard, founded in 1991, with offices in the United Kingdom, United States, Cyprus, UAE, Thailand, and most recently, South Africa. Contrary to what its name suggests, it uses an electronic local currency, the trade dollar. Since its inception, Bartercard has amassed a trading value of over US$10 billion, and increased its customer network to 35,000 cardholders.
Bartering in business
In business, barter has the benefits that the parties get to know each other, that it discourages rent-seeking investments (which are inefficient), and that trade sanctions can be imposed on dishonest partners.
According to the International Reciprocal Trade Association, the industry trade body, more than 450,000 businesses transacted $10 billion globally in 2008 – and officials expect trade volume to grow by 15% in 2009.
It is estimated that over 450,000 businesses in the United States were involved in barter exchange activities in 2010. There are approximately 400 commercial and corporate barter companies serving all parts of the world. There are many opportunities for entrepreneurs to start a barter exchange. Several major cities in the U.S. and Canada do not currently have a local barter exchange. There are two industry groups in the United States, the National Association of Trade Exchanges (NATE) and the International Reciprocal Trade Association (IRTA). Both offer training and promote high ethical standards among their members. Moreover, each has created its own currency through which its member barter companies can trade. NATE's currency is known as the BANC and IRTA's currency is called Universal Currency (UC).
In Canada, barter continues to thrive. The largest b2b barter exchange is International Monetary Systems (IMS Barter), founded in 1985. P2P bartering has seen a renaissance in major Canadian cities through Bunz, built as a network of Facebook groups that became a stand-alone bartering-based app in January 2016. Within its first year, Bunz accumulated over 75,000 users in over 200 cities worldwide.
Corporate barter focuses on larger transactions, which distinguishes it from a traditional, retail-oriented barter exchange. Corporate barter exchanges typically use media and advertising as leverage for their larger transactions. They use a currency unit called a "trade-credit", which must not only be known and guaranteed, but also valued at the amount for which the media and advertising could have been purchased had the client bought them directly; contracts spell this out to eliminate ambiguity and risk.
Soviet bilateral trade is occasionally called "barter trade", because although the purchases were denominated in U.S. dollars, the transactions were credited to an international clearing account, avoiding the use of hard cash.
Tax implications
In the United States, Karl Hess used bartering to make it harder for the IRS to seize his wages and as a form of tax resistance. Hess explained how he turned to barter in an op-ed for The New York Times in 1975. However, under the Tax Equity and Fiscal Responsibility Act of 1982, barter exchanges are now required to report transactions to the IRS. Income from barter is considered taxable by the IRS and must be reported on Form 1099-B. According to the IRS, "The fair market value of goods and services exchanged must be included in the income of both parties."
Other countries do not have the U.S. reporting requirement for proceeds from barter transactions, but taxation is handled the same way as for a cash transaction. If one barters for a profit, one pays the appropriate tax; if the transaction produces a loss, one records a loss. Bartering for business is likewise taxed as business income or business expense. Many barter exchanges require that one register as a business.
In countries like Australia and New Zealand, barter transactions require the appropriate tax invoices declaring the value of the transaction and its reciprocal GST component. All records of barter transactions must also be kept for a minimum of five years after the transaction is made.
Recent developments
In Spain (particularly the Catalonia region) there is a growing number of exchange markets. These barter markets, or swap meets, work without money. Participants bring things they do not need and exchange them for the unwanted goods of another participant. Since money is not allowed, swapping among three parties often helps satisfy tastes when no two participants directly want each other's goods.
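The three-party swap described above amounts to finding a cycle in the participants' wants. The sketch below illustrates the idea with hypothetical participants and items (the names and the brute-force search are illustrative assumptions, not a description of any actual market's procedure):

```python
# Illustrative sketch: a three-way swap in a moneyless exchange market.
# Each participant lists one item offered and one item wanted; a three-party
# cycle lets A give to B, B to C, and C to A even when no pair matches directly.

def find_three_way_swap(offers, wants):
    """offers/wants: dicts mapping participant -> item offered / item wanted.
    Returns a list of (giver, receiver, item) transfers, or None."""
    people = list(offers)
    for a in people:
        for b in people:
            for c in people:
                if len({a, b, c}) != 3:
                    continue  # need three distinct participants
                # a's offer satisfies b, b's satisfies c, c's satisfies a
                if (offers[a] == wants[b] and offers[b] == wants[c]
                        and offers[c] == wants[a]):
                    return [(a, b, offers[a]), (b, c, offers[b]), (c, a, offers[c])]
    return None

# Hypothetical participants: no direct pair matches, but a cycle exists.
offers = {"Anna": "books", "Biel": "tools", "Carla": "seeds"}
wants = {"Anna": "seeds", "Biel": "books", "Carla": "tools"}
print(find_three_way_swap(offers, wants))
# -> [('Anna', 'Biel', 'books'), ('Biel', 'Carla', 'tools'), ('Carla', 'Anna', 'seeds')]
```

Real markets find such cycles informally, by participants circulating among the stalls; the code only makes the underlying matching problem explicit.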
Other examples are El Cambalache in San Cristobal de las Casas, Chiapas, Mexico and post-Soviet societies.
Recent blockchain technologies are making it possible to implement decentralized and autonomous barter exchanges that can be used by crowds on a massive scale. BarterMachine is an Ethereum smart-contract-based system that allows direct exchange of multiple types and quantities of tokens with others. It also provides a solution miner that allows users to compute direct bartering solutions in their browsers. Bartering solutions can be submitted to BarterMachine, which performs a collective transfer of tokens among the blockchain addresses belonging to the users. If excess tokens remain after the users' requirements are satisfied, the leftover tokens are given as a reward to the solution miner.
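The settlement logic described above can be sketched in a few lines. This is not BarterMachine's actual contract code; the function and the token names are hypothetical, and it only models the accounting (pool the offered tokens, satisfy each user's requirements, award the surplus to the miner):

```python
# Illustrative sketch of collective barter settlement, in the spirit of the
# system described above (not its actual Ethereum contract code).

def settle(offers, requirements, miner):
    """offers/requirements: user -> {token: quantity}. Returns final balances,
    with any leftover tokens credited to the solution miner."""
    pool = {}  # all tokens deposited by all users
    for user, tokens in offers.items():
        for token, qty in tokens.items():
            pool[token] = pool.get(token, 0) + qty
    balances = {user: {} for user in requirements}
    for user, tokens in requirements.items():
        for token, qty in tokens.items():
            if pool.get(token, 0) < qty:
                raise ValueError("no valid bartering solution")
            pool[token] -= qty
            balances[user][token] = qty
    # whatever remains after all requirements are met rewards the miner
    balances[miner] = {t: q for t, q in pool.items() if q > 0}
    return balances

# Hypothetical example: alice swaps 5 GOLD for bob's SILVER; 1 GOLD is surplus.
offers = {"alice": {"GOLD": 5}, "bob": {"SILVER": 10}}
requirements = {"alice": {"SILVER": 10}, "bob": {"GOLD": 4}}
print(settle(offers, requirements, "miner"))
# -> {'alice': {'SILVER': 10}, 'bob': {'GOLD': 4}, 'miner': {'GOLD': 1}}
```

An on-chain implementation would additionally verify token ownership and execute the transfers atomically; the sketch captures only the matching arithmetic.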
See also
Collaborative consumption
Complementary currencies
Gift economy
International trade
List of international trade topics
Local exchange trading system
Natural economy
Private currency
Property caretaker
Quid pro quo
Simple living
Trading cards
Time banking
References
External links
Business terms
Cashless society
Economic systems
Pricing
Simple living
Tax avoidance
Trade